Dawid:
With the new test runner you created, would it be possible to set up an
annotation that we could use instead to indicate that a test should in
fact be run, and if it fails, include the failure info in the build
report, but do not fail the build?
I'm thinking in particular about some o
: as a build artifact). Yet another problem is that jenkins wouldn't
: _fail_ on such pseudo-failures because the set of JUnit statuses is
: not extensible (it'd be something like FAILED+IGNORE) so we'd need to
That was really the main question i had, as someone not very familiar with
the intern
: This is doable by enabling/disabling test groups. A new build plan
: would need to be created that would do:
:
: ant -Dtests.haltonfailure=false -Dtests.awaitsfix=true
: -Dtests.unstable=true test
right ...that's an idea that came up the other day when i was talking to
simon at revolution
an
: From:
:
http://jenkins.sd-datasolutions.de/job/Windows-Lucene-Solr-tests-only-trunk/63/testReport/junit/org.apache.solr.update/SoftAutoCommitTest/testSoftAndHardCommitMaxTimeMixedAdds/
:
: soft529 occured too fast: 1337365513160 + 500 !<= 1337365513589
:
: Is 'too fast' really an error?
:
:
:
: Click the down arrow (options, to the far right side of the
: attachments section), then choose "manage attachments" and you can see
: the apache icon beside all attachments on the issue.
For quick comparison...
https://issues.apache.org/jira/browse/SOLR-3499
https://issues.apache.org/jira/s
: ...i'll file an INFRA Jira to see if we can get this back on the main
: issue screen.
Scratch that ... It was already reported and Infra evidently
considers the matter resolved...
https://issues.apache.org/jira/browse/INFRA-4842
-Hoss
---
: LUCENE-:
: Fixed a horrible nasty bug. (Joe Contributor via John Doe Committer)
:
: I propose we remove "via " from CHANGES.txt. I don't
FWIW: at first glance i thought you were suggesting that we should only
use "(Joe Contributor, John Doe Committer)" ... which i would disagree
with b
: I've looked at the "via" in the changelog to figure out which committer
: works in which areas the most, and therefore who to ping about a patch.
That's a use for the info that i hadn't really considered, and definitely
gives me pause...
I guess i'm changing my opinion: -0.
-Hoss
---
Wait a minute ... why do we still have all of these
solr/contrib/*/CHANGES.txt files? ... i thought we decided a long time
ago to consolidate everything into ./lucene/CHANGES.txt and
./solr/CHANGES.txt ?
: $ find . -name CHANGES.txt
: ./lucene/CHANGES.txt
: ./solr/CHANGES.txt
: ./solr/contri
:
: NOTE: I definitely don't want to discourage you from tackling this
: issue, but I think its fair to mention there is a workaround, and
: thats if you can preprocess your queries yourself (maybe you dont
: allow all the lucene syntax to your users or something like that), you
: can escape the w
: In my opinion, the separate JVMs should not produce test failures or
: affect each other, because every JVM gets its own temporary directory
: for running tests and creating indexes.
I don't think anyone would disagree with that opinion -- but having a
common opinion doesn't magically make i
: [javac]
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-tests-only-4.x/checkout/solr/core/src/test/org/apache/solr/search/QueryEqualityTest.java:753:
class, interface, or enum expected
: [javac] package org.apache.solr.search;
: [javac] ^
This seems to be the problem Uwe menti
: Go on Webpage, login, choose job and finally "Workspace". There is a wipe
option.
awesome, thanks. i wiped the workspaces for all of the solr related 4x
builds.
what about the root cause? ...
: Uwe: do you have any more info about how/why this happens? is there a bug
: for Jenkins/Infra/s
: Subject: Custom distributed SearchHandler: where do I store information in the
: ResponseBuilder from handleResponses->finishStage
...
: Facets stores its data in a field in ResponseBuilder:
: public FacetComponent.FacetInfo _facetInfo;
:
: I could add my own field, but it feels
: Anyone know how to print out, at the end of a long run of tests, the
: names of the actual tests that fail to the console? It's a pain to
I typically use...
grep -rL 'errors="0" failures="0"' test_output_dir
(where test_output_dir depends on which set of tests failed)
however two
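Spelled out as a runnable sketch (directory and file names are hypothetical placeholders; this assumes the stock Ant/JUnit XML formatter, which writes an errors/failures summary into each report's testsuite header):

```shell
# List JUnit XML report files that record at least one error or failure.
# 'test_output_dir' is a placeholder for whichever directory holds the
# reports of the tests you ran.
find test_output_dir -name 'TEST-*.xml' \
  | xargs grep -L 'errors="0" failures="0"'
```

grep -L prints the names of files that do NOT match, i.e. the reports whose summary line is anything other than zero errors and zero failures.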
: In Junit4, there's a TestWatchman class that has a failed method that's
: called whenever a test fails. It *seems* like it would be possible to
: gather them all up and print them at the end, but I confess I have no
: clue how, and don't have time to look now.
interesting...
the intent of Test
: +1, this is the place to do it. you can get the test stats in
: endTestSuite(JUnitTest suite)
: and append it to a file (set from a sysprop).
:
: then later in the build, when the "tests.failed" sysprop is set, we
: cat the file.
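In shell terms the accumulate-then-report idea being described looks something like this (the file name is hypothetical; the real hook would live in the formatter's endTestSuite and the final cat would be guarded by the "tests.failed" sysprop):

```shell
# Each suite appends its failures to a single file as the build runs...
echo "SomeTest.testFoo FAILED" >> failed-tests.txt

# ...and at the very end of the build, if anything was recorded,
# print the whole list exactly once.
if [ -s failed-tests.txt ]; then
  echo "The following tests failed:"
  cat failed-tests.txt
fi
```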
I hadn't even thought of that ... i was assuming we'd need to imp
: i just added a patch to SOLR-2002 so that we only check the
: "tests.failed" once so you wont see it a ton of times... or the BUILD
: FAILED! over and over and over again.
I actually thought that was intentional ... that a test didn't just "FAIL",
several tests "FAILED FAILED FAILED!!"
(kin
: I don't think we should do this.
:
: because the hudson jail has 'tcp blackhole' set, tests should *never*
: depend on connecting to a port not accepting connections and getting
: an RST (it won't happen).
How should we deal with tests where the entire point of the code being
tested is dealin
: - Send emails to which address
There's no requirement that Hudson builds send email to anyone at all.
People who are interested in specific branches can always subscribe to the
RSS feed for that branch.
There are also options to only have hudson send emails to the specific
individuals who
: I find it onerous that one need do a merge for this kind of thing
: period. Why not just apply the patch a second time? Sure, something is
: lost in SVN, but it's covered elsewhere. Of course, the flip side is
: that by not doing it, it becomes all that much harder to merge in the
: futur
: FWIW, I get the sense that a lot of other projects deal with merges. What do
they do?
I suspect they do merges properly and avoid this problem entirely.
Bottom Line: if *all* merges happen at the top level, then this problem
won't exist -- mergeinfo props get added to individual files only
FYI: I started seeing this error when Miller committed SolrCloud to trunk
and included the slf4j log4j bridge jar -- the root cause was that i had
a very old version of log4j in my classpath (as part of a custom ant
install)
: Date: Sat, 16 Oct 2010 05:44:03 -0400
: From: Michael McCandless
: Now we are nocommit-free on trunk & 3.x.
FWIW: we can make hudson fail the build if "nocommit" is found anywhere in
the source.
Just an idea if people are interested.
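A minimal sketch of that check (paths are hypothetical; the real version would hang off whatever validate step the build already runs):

```shell
# Exit non-zero if any Java source file still contains a nocommit marker,
# so a CI job running this step fails the build.
# 'src/' is a placeholder for the checkout root.
if grep -r -l --include='*.java' 'nocommit' src/ ; then
  echo "BUILD FAILED: nocommit markers found" >&2
  exit 1
fi
```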
-Hoss
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.a
: As part of https://issues.apache.org/jira/browse/SOLR-2080, I'd like to
: rework the SpellCheckComponent just a bit to be more generic. I think I
: can maintain the URL APIs (i.e. &spellcheck.*) in a back compatible way,
: but I would like change some of the Java classes a bit, namely
: Does anyone recall why the Carrot2 stuff is disabled by default in the
: Solr example? If my memory serves, it was due to the licensing issues
: that required the user to download certain libs. Was there any other?
: In other words, I'd like to enable it by default, as I have hooked it
:
: Anyway, I think its possible other users might be in this same
: situation, with slow performance, and not even realizing it yet...
: Obviously they can fix this if they go and add LengthFilter, but
: should we be doing something different?
On one level, i think a big improvement might just be
: why not just discard them completely in say, indexer/queryparser ?
In QueryParser: maybe, that's a high level API with assumptions about
"human" interaction and text.
In the IndexWriter: it seems like a bad idea.
Low level Lucene really shouldn't be making any assumptions about *how*
the cl
: LUCENE-2746: add TM to some of the logos, don't have an editor for the others
:
: Modified:
:
lucene/dev/trunk/solr/src/site/src/documentation/content/xdocs/images/solr.jpg
:
lucene/dev/trunk/solr/src/site/src/documentation/content/xdocs/images/solr_FC.eps
grant: I'm not sure how you
: I'm running 1.4.1 on a Windows box. Trying to specify dismax via
: defType=dismax fails, returning 0 results and doesn't look like it hits the
: dismax handler at all, at least the parsed query comes back with +() +()
: with debugQuery=on.
that parsed query looks like it would have come from di
: docs: changes to non-released features don't need entries
Hmmm.. sorry, i misread CHANGES.txt and thought SOLR-1516 was in the 1.4
section.
-Hoss
I'm going to side step the "use jira to generate changes.txt" debate, and
focus on what i think is the broader problem with a simpler fix.
FWIW, i like CHANGES.txt myself, i think the jira generated pages
complement it, and give you a different view of the same info, but i
prefer CHANGES.txt bec
: I'm playing with dismax and the mm parameter. Actually, configuring it in
: solrconfig.xml. Starting simply, I just put 2 in, then I auto-formatted the
: xml now it looks like:
:
: 2
:
:
: When executing this, I get a number format exception at
: o.a.solr.util.SolrPluginUtils.calculateMinS
: importantly if i wanna run a single testcase by
: -Dtestcase=org.apache.solr.request.TestFaceting.testFacets clean test
I think that should be "-Dtestcase=TestFaceting -Dtestmethod=testFacets"
: I am not even sure if the test is really running .. also why cant I just run
: the test directly b
: NOTE: reproduce with: ant test -Dtestcase=DistributedClusteringComponentTest
: -Dtestmethod=testDistribSearch
: -Dtests.seed=4959909076277587079:-8952133138041211916 -Dtests.multiplier=3
:
: But I couldn't reproduce it on my mac.
It's failing consistently on both the trunk and 3x hudson jobs,
+1
(FWIW: i didn't notice until now that those IDE targets get added, might
be nice to standardize on a naming convention... "ant setup-ide-idea",
"ant setup-ide-eclipse", etc...)
: Date: Tue, 4 Jan 2011 08:05:22 -0500
: From: Robert Muir
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apa
: But i think we can safely use the simplest target names: 'idea' or 'eclipse'
:
: At some point if someone makes an IDE called 'build', 'test', or
: 'clean', we can append this-target-sets-up-* to their badly named IDE.
*shrug* ... i'm less worried about that than i am about people who have
ne
: -0 on the setup-ide-* standardization - I like the shorter form. If
: there were 19 different supported IDEs, I would agree. But with just
: two, what's the gain?
i wasn't trying to argue for any gain -- just standardization. i wasn't
overly worried about what the standardization looked
: > + public static final Set EMPTY_STRING_SET =
Collections.emptySet();
: > +
:
: I don't know about this commit... i see a lot of EMPTY set's and maps
: defined statically here.
...
: I think we should be using the Collection methods, for example on your
: first file:
Hmmm... i am us
ApacheCon Europe will be happening 5-8 November 2012 in Sinsheim, Germany
at the Rhein-Neckar-Arena. Early bird tickets go on sale this Monday, 6
August.
http://www.apachecon.eu/
The Lucene/Solr track is shaping up to be quite impressive this year, so
make your plans to attend an
: Anyone who *wants* a phrase query can ask for one with double quotes.
: If you force this option on, users have no way to turn it off.
:
: I'm strongly opposed. I could care less about english.
Hold on a minute and think about what jack is pointing out here.
I can understand your argument aga
: Can you honestly generalize this rule from "how to handle hyphen" to
: "if > 1 term comes out of a whitespace-separated term, it must be a
: phrase query?".
No, which is why i never said that. what i said was "Hold on a minute and
think about what jack is pointing out here" -- instead of dism
: What many of us not familiar with the tokenizing rules of the standard
: tokenizer just realized is that it's not a good default for english
: and probably most other european languages.
Jira is down for reindexing at the moment, so i can't file this suggestion
as a new Feature proposal (or co
: > http://unicode.org/reports/tr29/#Word_Boundaries
: >
: > ...I think it would be a good idea to add some new customization options
: > to StandardTokenizer (and StandardTokenizerFactory) to "tailor" the
: > behavior based on the various "tailored improvement" notes...
: Use a CharFilt
: I am wondering what the expected initialization sequence of the analysis
: factories are with respect to init() and ResourceLoaderAware.inform().
: At least judging from some tests, it seems that inform() is called
: afterwards. I was expecting the other way around, so that init() can do
: But it really depends on how you want your whole analysis process to
: work. e.g. in the above example if you want to treat "foo-bar" as
: really equivalent to foobar, or you want to treat U.S.A as equivalent
Unless i'm misreading the Word Boundary doc, the point of these types of
tailorings
: I have a patch for SOLR-1093. It isnt a complete solution but
: close. The sub queries are run serially as Lance
...
: I havent heard back yet. Following is the information about the patch.
: Please do let me know if you have any concerns. I will wait for a week and
: if I do
Fuck...
mistake #1: thinking that the centralized licenses stuff meant i didn't
need to worry about sha1 & licenses files for jars already in use in
lucene - forgot there are two dirs for this.
mistake #2: forgetting to run "ant validate"
...working on it.
: Date: Wed, 15 Aug 2012 19:52:43
: Sorry, Solr is correct. When directly passed to ResourceLoader the path
: is correct. The problem here is the way how ClasspathResourceLoader
: handles this. It uses Class.getResource() to load and thats wrong,
: because that one expects a "/" to be absolute. We have to fix this and
: maybe
: Because people impl the default algorithm for general purposes. Those
: tailorings are not 'mandatory'.
I didn't say they were mandatory, I said it seems like it would be a good
idea to add options for them.
The spec says: "... implementations may override (tailor) the results to
meet the re
: 1) Is running at least one core required or is the message above referring
: to some admin console functionality that wont work without at least one
: core? If running at least one core is required, perhaps this needs also
: to go in the Release notes/Changes.
having at least one core (and ha
: This is cool. I'd say lets get it up and going on jenkins (even weekly
: or something). why worry about the imperfections in any of these
: coverage tools, whats way more important is when the results find
: situations where you thought you were testing something, but really
+1.
Even if it ham
: I didn't say that. I said the opposite - that having imperfect tests
: (or rather tests that cannot be fixed for whatever reason) discourages
: from looking at test failures and makes one just unsubscribe from the
: jenkins mails. If this is the case then yes, I think not having a test
: like th
: Off of the top of my head, the only thing I can figure is that the
: maven based tests are somehow getting the wrong schema sometimes.
: Maybe if there's some different with how solr homes are set between
: ant and maven?
that should be easy to sanity check right? add something like this into
On the solr-user list, Dirk Högemann recently mentioned a problem he was
seeing when he tried upgrading his existing solr setup from 3.x to
4.0-BETA. Specifically this exception getting logged...
http://find.searchhub.org/document/cdb30099bfea30c6
auto commit error...:java.lang.UnsupportedO
: Thanks Jack that was helpful!
: So in order to use uuid as uniqueKey update processor chain is the way to go.
There are two ways to do it.
correct.
:
: uniqueKey
: NEW
:
...that approach won't work, it still relies on the UUIDField class
accepting "NEW" as input to
: I tested this approach (At revision 1379678) and it seems working. I can
: see generated values. e.g. a259aa91-353f-4824-9f68-01837b721cf7
Hmmm... on a single node instance it might work -- but i'm pretty sure
it's just "tricking" the processing chain into thinking the uniqueKey for
the docu
: A second part here is the SignatureUpdateProcessor. It is a similar
: item in that it can update 'id'. Are there any gotcha's with it?
it's totally safe for update processors to assign the uniqueKey (that's
why the UUIDUpdateProcessor was written to replace the 'NEW' meme) as long
as you don'
Ouch.
Can't our new SecurityManager block any code from calling System.exit?
(doesn't help users, but would have at least helped us discover this in
tests right?)
: Date: Mon, 3 Sep 2012 13:03:13 +0200
: From: Dawid Weiss
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject:
: The problem is that you have an older jUnit version in your
: $ANT_HOME/lib or ~/.ant/lib classpath. Please use a freshly and clean
: installed ANT version (not one from your Linux distribution, those are
: often cluttered with outdated stuff). Lucene needs no such libraries
: like JUnit in
: Yeah, now it does -- Uwe took care of it.
yeah ... i didn't see it suggested in this thread, but then later saw the
new jira/commit. awesome.
-Hoss
: the real problem is the solr/lib which is "shared" by solr and solrj:
: https://issues.apache.org/jira/browse/SOLR-3686
I suspect the only reason it was ever setup that way, was to
prevent having duplicate copies of jars in svn and when we ship releases.
svn & src releases are no longer an
: The wiki doc for updateLog says that this feature is “not enabled by default”
: when in fact it is enabled by default in the example solrconfig.xml for Solr
: 4.0-BETA.
:
: The question is whether the example is wrong or the doc is wrong. All I know
: is that they do not match.
The wiki is cor
folks who are versed in the details of that javadocs-lint helper script
may want to note that it did not fail on this (pre-commit) bad html (which
i thought was one of the things javadocs-lint was supposed to be
looking for -- but if i'm misunderstanding, then feel free to ignore)
: Date
Is there any reason why SlowFuzzyQuery shouldn't be in the class level
javadocs for FuzzyQuery ?
: Hi Francisco: The core FuzzyQuery does not support edit distances > 2,
: because the automatons used for this would be too big and slow. If you
: really want distances > 2, use
:
http://lucene.
: > Is there any reason why SlowFuzzyQuery shouldn't be in the class level
: > javadocs for FuzzyQuery ?
: >
:
: I already answered this today: I'm strongly against having this
: unscalable garbage in lucene's core.
That doesn't answer my question about mentioning SlowFuzzyQuery in the
javadocs
: I'm not really sure thats a reflection on this release candidate: I'm
: not sure the smoke tester works on windows. I think you may have to
: test manually.
FWIW: smokeTestRelease.py says...
# This tool expects to find /lucene and /solr off the base URL. You
# must have a working gpg, tar, un
: Artifacts are here: http://s.apache.org/lusolr40rc0
For the record: My vote for RC0 is -1.
In my opinion, SOLR-3875, SOLR-3879, and LUCENE-4430 seem serious enough
to warrant a respin (and if a few of the other recently fixed bugs can
make it in even better).
I don't want to speak on beha
Wait a minute ... this fixVersion update caught my eye.
the 4.0-BETA release highlights said...
* Improved Solrj client performance with Solr Cloud: updates are
only sent to leaders by default.
...and i just merged that into the 4.0 final release highlights -- but
based on this issue des
: Logos are important to recognize as trademarks as well.
: For the project's official logo (if it has one, and
: especially if it uses the ASF feather), ensure that it
: includes a small "TM" symbol in the graphic or
: immediately adjacent to it. For pages that inclu
https://issues.apache.org/jira/browse/INFRA-5327
This may be a problem when it comes time to announce/publicise 4.0 final
-Hoss
: Are you sure about this? did you force a rebuild?
:
: If you edit templates, you often need to do this. Ill try this really
: quick and see if it solves your problem (I suspect it will).
To close the loop: rob's force rebuild didn't help the problem, but in
looking at it with him we discovere
https://issues.apache.org/jira/browse/SOLR-3904
If someone who knows more than me about the build/smoke checks can attach
a patch with the necessary bits to make the build fail when solr packages
don't have package level docs, i'll work towards getting the build to pass
with that patch.
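Not a patch, but the shape of the check i have in mind, sketched with hypothetical paths:

```shell
# Flag source package directories that contain .java files but no
# package-level documentation (package.html or package-info.java).
# 'solr/src' is a placeholder for wherever the real sources live.
for d in $(find solr/src -type d); do
  ls "$d"/*.java >/dev/null 2>&1 || continue
  if [ ! -e "$d/package.html" ] && [ ! -e "$d/package-info.java" ]; then
    echo "missing package docs: $d"
  fi
done
```

The build could fail whenever that loop prints anything.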
: Fi
: artifacts are here: http://s.apache.org/lusolr40rc1
VOTE: +1 for the following artifacts (sha1)...
378a1077b9b8806e8a972e8b56722696c07c1078 *apache-solr-4.0.0-src.tgz
787c41de747053bd22dfbb24ebbd1a6184038eef *apache-solr-4.0.0.tgz
5c7d5143dbe02b430a5dd0243385f55b928f1f40 *apache-solr-4.0.0.zip
: http://lucene.apache.org/solr/tutorial.html
:
: However I think I found a bug: in order to delete a document specified by an
: id, the following command must be used:
fixed, thanks for reporting
(FYI: i already verified that the javadoc versions on 3x and trunk
don't have this problem so thi
: Subject: Re: Welcome Stefan Matheis
Stefan: I just noticed you aren't currently listed on the website...
http://lucene.apache.org/whoweare.html
I think Ryan forgot to mention that traditionally new committers add
themselves to that list (it's typically your first commit to verify that
SVN a
(some history for those who aren't aware)
Something we've done on the Solr wiki since day one was have a wiki page
dedicated to each of the minor releases (with sub sections for any patch
releases) that lists info about the development of any releases that
haven't happened yet, and important n
: What is the purpose beyond release notes? The ReleaseNotes36 type
: pages have a well-defined purpose, thats the exact release note we
: will send out.
: I think they are useful because it prevents the release manager from
: having to do that work (other people can populate them with a summary
:
: I don't understand the difference. If you are saying rename
: ReleaseNote36 to Release36, then thats fine!
the difference is we explicitly link to the URL of that page in the
CHANGES.txt and README.txt with the verbiage i suggested...
>> More information about this release, including any errat
: I don't think we should really set ourselves up for failure. Why can't
: we document the features in the release up-front and put time into
: trying to make it readable and concise, its going to be put on the
: website as well as sent via email to a ton of people, and maybe
: copy-pasted in blog
Since there has been interest expressed in the last few weeks in ramping
up to a 3.6 release, I suggest we adopt a similar process to the one we've
used in the past, and start pruning the "Fix Version: 3.6" from issues
that seem like they aren't on anyone's radar, or don't seem crucial enough
: Assignee: Tommaso Teofili (was: Tomás Fernández Löbbe)
:
: I think it was assigned to the wrong person. Assigning it to Tommaso
Doh!... so sorry about that... got the wrong name in the pulldown and then
evidently typed the wrong name in the comment as a result.
-Hoss
: I volunteer to do the first bulk "fix!=3.6" pruning against "The Query" in a
: few days (wed or thurs) if no one objects in the meantime.
: "The Query"
: (unresolved, fix=3.6, no assignee, updatedDate < march & not bug)
:
https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&jql
: OK, I need some basic tutoring here. What constitutes a _real_ failure?
: And is there a simple way to find them?
Assuming you don't customize the junit test output writer you can do
something like...
find -name TEST\*xml | xargs grep -L 'errors="0" failures="0"'
...which will give you all t
: Unless I hear objections, i think that ~48 hours from now we should bite
: the bullet and prune any unassigned issue (regardless of type) that hasn't
: been updated since March 19 (ie: actively being discussed
: this week)...
...
: ...that will get us to the point where almost all of
I think there must be something wonky with the javadoc "classpath" (or
whatever it's called in javadoc) on trunk when using the java 6 javadoc.
I'm seeing solr/contrib/uima complain a lot about packages/files not
existing when using "ant javadoc" (either at the top level or just in
solr).
i
ild just the contrib.
:
: -
: Uwe Schindler
: H.-H.-Meier-Allee 63, D-28213 Bremen
: http://www.thetaphi.de
: eMail: u...@thetaphi.de
:
:
: > -Original Message-
: > From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
: > Sent: Thursday, March 22, 2012 8:09 PM
: > T
: LUCENE-3847: ignore user.timezone because it is set by java logging system
and this is hard to predict.
crazy thought: what if instead of ignoring it in these checks, we force
set it to UTC? either in ant or in the test runner .. whichever makes
more sense. (that way, when interpreting test
: We already don't have to worry about this: locale and timezone are set
: statically for the whole test class.
:
: But by randomizing it, we also don't have to worry about improper use
right, of course ... i forgot we were explicitly randomizing it -- but
that begs the question: why then does
: This definitely should be cleaned up and I would love to break down
: the existing legacy hook methods into either rules or at least cleaner
: methods, but I'd rather do it after I land LUCENE-3808. I realize this
: sentence has become my usual defensive line recently.
that is not a defensive l
: Unless I hear objections, i think that ~48 hours from now we should bite
: the bullet and prune any unassigned issue (regardless of type) that hasn't
: been updated since March 19 (ie: actively being discussed
: this week)...
:
:
https://issues.apache.org/jira/secure/IssueNavigator.jspa?res
: http://www.apache.org/dev/svn-eol-style.txt
There is at least one contradiction i know of between what's listed there
and what we have done in the past...
https://wiki.apache.org/solr/CommitterInfo?highlight=%28eol\-style%29#Suggested_Subversion_Configuration_Options
# use LF for shell scripts since ev
: I've just modified ivy.xml to use jackson 1.7.4 and this triggered an
: interesting situation in which I ended up having two versions of
: jackson in my checkout. This begs the question of what should we be
: doing to remove stale JARs on an update of ivy descriptors. Should
: "ant clean" remove
: By adding to the validate section
: of build.xml, I got it to print out the java classpath, which includes
: the jar downloaded by the ivy-bootstrap option:
:
: [echo]
:
/usr/share/java/ant.jar:/usr/share/java/ant-launcher.jar:/usr/share/java/jaxp_parser_impl.jar:/usr/share/java/xml-c
: Whenever I try to run tests for 3.x I am getting problems with the jetty
: jars for the solr example. Before the checksums were added I was
: getting an error reading the jar. Now I get a bad checksum error.
sounds like it was corrupted when downloading?
try "ant clean-jars" and if that do
: Please vote to release these artifacts: http://s.apache.org/lusolr36rc0
+1
I encountered a few very nit-picky problems, mostly related to
Solr->Lucene javadocs linkage -- but as long as we upload the lucene
javadocs to where the solr releases are linking when the release is
official, th
: > * "ant compile" in lucene src artifacts builds jars for some contribs not
: where did you get this target name 'compile' from? Is it listed in any
: of our documentation that this target should even work?
: its not a 'public target' listed in ant -p
Heh... sorry. pure muscle memory fro
: > Ignore "compile" ... "ant test" has the same result -- jars are built for
: > some contribs, but not all -- which is kind of confusing when you run all
: > the tests, and then go look for the demo but it's not there.
: >
:
: ant test really shouldnt build any jars at all :)
that's my point -
+1
SHA1's of the Artifacts inspected...
fac4f6d6b2fb742c830f468b8d8847f8da440b8f *lucene-3.6.0-src.tgz
88b3380cdb4d9bd0b0a082be23831143bba1acce *lucene-3.6.0.tgz
7d38276a13a5e6a5ae49b7c5514f22b5f185082b *lucene-3.6.0.zip
d4b95804603d4dfb5aa70def78a6744a07e50964 *apache-solr-3.6.0-src.tgz
558cdf1
: I've noticed our jenkins nightly build pretty much always fails
: recently (only 2 successes on the whole page)
: https://builds.apache.org/job/Solr-trunk/
most of the recent full nightly build failures are because
TestDistributedSearch.testDistribSearch fails in that build (but not in
the t