: This definitely should be cleaned up and I would love to break down
: the existing legacy hook methods into either rules or at least cleaner
: methods, but I'd rather do it after I land LUCENE-3808. I realize this
: sentence has become my usual defensive line recently.
that is not a defensive
: I volunteer to do the first bulk fix!=3.6 pruning against The Query in a
: few days (wed or thurs) if no one objects in the meantime.
: The Query
: (unresolved, fix != 3.6, no assignee, updatedDate < march, not bug)
:
: OK, I need some basic tutoring here. What constitutes a _real_ failure?
: And is there a simple way to find them?
Assuming you don't customize the junit test output writer you can do
something like...
find -name TEST\*xml | xargs grep -L 'errors="0" failures="0"'
...which will give you all the
: Unless I hear objections, i think that ~48 hours from now we should bite
: the bullet and prune any unassigned issue (regardless of type) that hasn't
: been updated since March 19 (ie: actively being discussed
: this week)...
...
: ...that will get us to the point where almost all of
: Assignee: Tommaso Teofili (was: Tomás Fernández Löbbe)
:
: I think it was assigned to the wrong person. Assigning it to Tommaso
Doh!... so sorry about that... got the wrong name in the pulldown and then
evidently typed the wrong name in the comment as a result.
-Hoss
(some history for those who aren't aware)
Something we've done on the Solr wiki since day one was have a wiki page
dedicated to each of the minor releases (with sub sections for any patch
releases) that lists info about the development of any releases that
haven't happened yet, and important
: What is the purpose beyond release notes? The ReleaseNotes36 type
: pages have a well-defined purpose, that's the exact release note we
: will send out.
: I think they are useful because it prevents the release manager from
: having to do that work (other people can populate them with a summary
: I don't understand the difference. If you are saying rename
: ReleaseNote36 to Release36, then that's fine!
the difference is we explicitly link to the URL of that page in the
CHANGES.txt and README.txt with the verbiage I suggested...
More information about this release, including any errata
: I don't think we should really set ourselves up for failure. Why can't
: we document the features in the release up-front and put time into
: trying to make it readable and concise, its going to be put on the
: website as well as sent via email to a ton of people, and maybe
: copy-pasted in
: http://lucene.apache.org/solr/tutorial.html
:
: However I think I found a bug: in order to delete a document specified by an
: id, the following command must be used:
fixed, thanks for reporting
(FYI: i already verified that the javadoc versions on 3x and trunk
don't have this problem so
: Subject: Re: Welcome Stefan Matheis
Stefan: I just noticed you aren't currently listed on the website...
http://lucene.apache.org/whoweare.html
I think Ryan forgot to mention that traditionally new committers add
themselves to that list (it's typically your first commit to verify that
SVN
: I was thinking that in order to actually get people to use and test
: these things, we should try to make them more than just nightly
: builds.
agreed ... your bug criteria below make sense, and I think it would
help promote testing adoption if we treated the alpha/beta release(s) as
: I agree that it might be better to have no affects version. But
: having an affects version allows us to mark issues only relevant to the
: 4.x branch. Otherwise I cannot sort for issues that only affect 4.x but
: do not yet have a fix version.
:
: For me it is easy to remove all Affects
: I wanted to report that the link for Nightly Build Documentation is
: broken on the following page:
: http://lucene.apache.org/core/developer.html
already fixed, thanks.
: Is there someplace where I can view the 4.0 Javadoc? (btw, any update on
: the status of an alpha/beta release?)
on
: Build: https://builds.apache.org/job/Lucene-trunk/1851/
:
: No tests ran.
Looks like same problem as the 3x branch: exec of svnversion is failing,
which makes me suspicious that it isn't in the path.
-Hoss
: OK, so how about this for Solr documentation on the website:
: pseudo-versioned live docs.
:
: The docs for 4x live under
: solr/4 or solr/doc/4
:
: These docs wouldn't be strictly versioned... we would continue
: updating the docs as needed after a release.
...
: A different question
: What shall be the procedure of peer review of CMS website changes? Is it
: examine patches in JIRA just like source code is, or is it to commit changes
: and observe them on the staging server http://lucene.staging.apache.org/ ?
: (or perhaps it depends)
it depends
the biggest value of the
: The mailing lists are not mentioned on the Solr wiki page:
: http://lucene.apache.org/solr/
:
: The old apache sites had 'developer resources' with issue tracking,
: source, mailing lists. The lists should be under 'Resources' on the
: main page.
a) that's not the wiki, that's the website.
: 2 ...solr/webapp/web/js/jquery.sparkline.js and a couple of others
: already had a license notification in them. I added in the Apache
: license information, should I have?
IANAL but I'm pretty sure you should *not* replace/add an ASL header to
any file that already has some other license
: I think currently Grant may be the only one in the know on this...
:
: I almost think i remember reading about a web interface for it?
if you load the CMS bookmarklet for any page on lucene.apache.org, there
will be a Publish Site link...
https://cms.apache.org/lucene/publish
FWIW: i have almost no opinion at all on what wiki software we use, but
that really just seems like the seed of this conversation...
: Specific to Solr, I think we should drop all the back compat in our
: documentation and target it toward 4.0
In an ideal world, i think the best way to
: can anyone confirm?
everything you said sounded right to me ... scale(...) as a general
purpose function probably can't be optimized very heavily. But it would
probably be possible to write an optimization specific to
scale(fieldname).
I seem to recall someone saying something at some
: Can we improve this? Both Min and MaxFieldValueUpdateProcessorFactory
: show up as a compile error in eclipse, which is frustrating to people
: who use those IDEs.
really? what is the compile error?
: While it could be a bug in the eclipse compiler, this code is
: definitely on shaky
: really? what is the compile error?
:
: Type mismatch: cannot convert from List<Comparable> to Collection<Object>
FYI: rmuir and I synced up on IRC, this is fixed in r1242534.
-Hoss
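For anyone following along, here is a minimal sketch (hypothetical class, not the actual Solr source) of the kind of mismatch in that error: Java generics are invariant, so a `List<Comparable>` is not a `Collection<Object>` even though every element is an Object.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class GenericsMismatch {
    public static void main(String[] args) {
        List<Comparable<String>> values = new ArrayList<>();
        values.add("a");
        // Collection<Object> objects = values;  // rejected: incompatible types
        Collection<?> wildcard = values;         // a wildcard bound compiles fine
        System.out.println(wildcard.size());
    }
}
```

Whether a given compiler accepts or rejects a borderline conversion like the quoted one is exactly where eclipse's compiler and javac have historically disagreed.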
: I get 404 in both cases (with and without trailing slash). There are also
links with /java/index.html
this should be fixed .. the redirect to deal with the versioned docs broke
the simple case of just /java
-Hoss
WTF?
I've kind of been tuning out the buildbot commits since they all looked
like they were triggered by people publishing changes after committing
them to the staging setup, but this one doesn't seem to be related to
anyone actually editing the site.
this looks like some sort of automated
: DOAP has moved - please fix files.xml
looks like the lucene-core doap file was being referenced via the public
website, and that URL no longer exists. (the solr doap file was being
referenced via svn, and still exists even though it's in the site
directory which we will probably want to
: 1) moving both doap files into the CMS
done -- but apparently the lucene doap wasn't in SVN anymore since the
trunk merge? ... anyway, I couldn't find it anywhere I would have
expected it to be given the old site layout, so I created a new one based
on the solr one and the details I
: The news as the front page for the lucene / solr pages seems a little
: less than ideal as our initial impression to new viewers.
:
: I think we should try and put something more compelling.
:
: As a start, I propose this:
:
: How about changing the features tab to News and moving the
: We are now live!
:
: http://lucene.apache.org/
FYI: Grant had put in a redirect for dealing with the /java/ - /core/
directory change, but forgot that a bunch of page names and urls had
changed slightly as well.
as you have probably noticed based on all the commits, i scrambled to try
and
: I have made a hack to fix this issue to add Array support. I would
: like to submit that as a patch if no one else is working on fixing
: this issue.
Harshad: patches are always welcome. If you haven't seen it yet, there is
a wiki with details on how to contribute new patches...
: Working on it. Note, Markdown allows for the regular use of HTML:
: http://daringfireball.net/projects/markdown/syntax#html
right ... hence my question of how we *want* to fix it, given that we have
a lot more options for doing things in markdown than we had with forrest
... we almost need
this is the wrong jira number, so the jira-commit linkage is going to be
off...
: SOLR-3062: implement openSearcher=false, make commitWithin soft, refactor
commit param parsing
...
: +* SOLR-3069: Ability to add openSearcher=false to not open a searcher when
doing
: + a hard
: The only reason for the p* field types is backward compat with older configs.
: But from 3.5 the Trie fields can fully replace p* fields.
:
: I suggest that for 3.6 we deprecate solr.IntField w/friends in Java and add a
deprecation warning to example schemas.
: For 4.x we can remove them
: : I suggest that for 3.6 we deprecate solr.IntField w/friends in Java and add
a deprecation warning to example schemas.
: : For 4.x we can remove them completely both from code and schema. Less is
more...
: -0 to removing the classes completely
I guess i should have said...
-0 to removing
: So, I'd suggest everyone give it a pass through, fix items as they see
: them and/or otherwise pitch in and help, because I'm tired of our
: current site and how poor it makes us look.
Grant: my one concern is how the Solr tutorial currently looks,
particularly related to inline code and
Reading up a bit on markdown, and poking around the generated html and
the css we are using I *think* (assuming I understand everything
correctly) we have two problems...
: places where there should be inlined 'code' in a fixed width
: font, ie...
...this looks like a CSS problem. Our CSS is
: inline code quoting. we should probably change most of the code css to
: use the pre code since that's what the markdown docs i'm looking at
: say markdown generates for code blocks.
...
: ...that looks like a content problem. we seem to be using source as
: the markup for our code
: As a bonus, SolrConfig.severeErrors is gone as is all the stuff around
: CoreContainer.abortOnConfigurationError.
(Erick){2}son just earned himself fucking sainthood in my book.
-Hoss
: Excuse me. It's in our Eclipse templates
no need to apologize, i just wanted to let you know why i made the change.
-Hoss
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail:
: This only affected trunk. The problem was that I managed to apply the
: patch multiple times after I ran the tests before checking in the
: first time. I do not understand why patch will append content if you
: apply the same patch repeatedly, but there it is.
it's because the patch command
I'll defer to the collective wisdom of folks like rmuir, simon, uwe, and
mccandless about when the major new APIs/features like codecs and
docvalues are stable and vetted enough to consider releasing, and I'll
defer to miller and yonik about when the solr cloud update stuff is ready;
but I think we'd
: Yeah, that would be one thing -- different classpaths/ vm properties
: etc. This could be problematic.
:
: The Python runner completely cheats here, which is bad (because we may
: pick up a dep we didn't intend to, and never catch it)... just takes
: the union of all CLASSPATHS.
as long as
: This works fine for a SearchComponent, but if I try this for a QParserPlugin
I get the following:
:
: [junit] org.apache.solr.common.SolrException: Invalid 'Aware'
: object: org.apache.solr.mcf.ManifoldCFQParserPlugin@18941f7 --
: org.apache.solr.util.plugin.SolrCoreAware must be an
take a look at the CloseHook API and SolrCore.addCloseHook(...)
: Is there a preferred time/manner for a Solr component (e.g. a
: SearchComponent) to clean up resources that have been allocated during
: the time of its existence, other than via a finalizer? There seems to
: be nothing for
Grant: a quick skim suggests that your change to
solr/core/src/test-files/solr/conf/solrconfig.xml in r1214937 broke
several tests that use that config w/o a uniqueKey field.
At first glance, it's not even clear why you added that to the (heavily
overused) solrconfig.xml anyway, since there
: I'm consistently getting the error below when running tests, updated
: checkout of Solr 3x, no changes to the code.
:
: Note, in my case, it isn't necessary to specify the seed at all, ant
: test -Dtestcase=TestSolrEntityProcessorUnit fails all by itself.
i don't see a failure, but skimming
: if (ftype==null || !(ftype instanceof FloatField)) {
:   throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
:     "Only float (FloatField) is currently supported as external field
:     type. got '" + ftypeS + "'");
...
: now that Trie fields support sortMissingFirst/Last,
: valType is NOT optional at all, at least in the 3x code line.
: You get errors like this on startup if you leave it out:
:
: Dec 14, 2011 2:07:48 PM org.apache.solr.common.SolrException log
: SEVERE: org.apache.solr.common.SolrException: Missing parameter
: 'valType' for
: But that's just it. There's no way for the EFF to point to any underlying type
: at all! Still, I can easily be persuaded that it'd be a bad thing to
I'm talking about configuration.
* this should be valid, because it has been valid in the past and we don't
want to break existing configs
:
: http://localhost:8084/solr35/myapp/select/?wt=xml&start=0&rows=13&q.alt=my+query&sort=sum(mul(score,3),mul(num_visits,2))%20desc
:
: In this way we will be able to add little modifications to our score
a) there is a difference between modifying the scores of queries based on
functions, and
I never really noticed until today that NumericField and
NumericTokenStream don't support Short or Byte. (even though things like
FieldCache do).
I'm wondering if there is a particular reason why they were *not*
included, or is it just that with numeric values in that small of a range,
: I'm wondering if the import for
: org.eclipse.jdt.core.dom.ThisExpression in SolrCore.java introduced in
: r1196797 (SOLR-2861) was a mistake. It adds an additional .jar
: dependency and doesn't seem to be used.
I asked yonik about this on IRC last week when i saw your email and he
said
: This is why I agree with your fix (versus adding a regular expression
: to ignore the warning in addition to Uwe's java 7 hack)... its just
: about whether it should be a blocker to release or not.
:
: Because chances are within the 72 hour vote period the link starts
: working again... if
: As far as I remember you cannot vote against...
bull. fucking. shit.
Release VOTEs can not be VETOed, but people are free to vote as they see
fit...
https://www.apache.org/foundation/voting.html
I cast my vote, and stated my opinions about respinning. Discussion
ensued. If you
: Hi all, I have need of the functionality proposed in SOLR-1351, and I would
: like the chance to dip my toes in the water to implement this
:
: what would be needed to resurrect this patch ?
Hey Greg, thanks for your interest in contributing.
Rather than reply with comments here in email,
: Sorry for being @UweSays!
: Uwe
Whoa that's a little too meta even for me ... it's turtles all the
way down!
-Hoss
:
http://people.apache.org/~simonw/staging_area/lucene-solr-3.5.0-RC2-rev1204988/
+0
Two things jumped out at me as being bad, but i'm not certain if either
warrant a re-spin...
1) in the src builds, attempts at building javadocs result in an ant
failure because of a warning that it
: Lucene: The last time this was done was with the 2.1.0 release. From
...
: Solr: Prior to the Lucene-Solr development merge, Solr always did the
...
Cool ... so I just haven't been paying attention and this is expected.
thanks.
-Hoss
: Agreed -- we shouldn't / can't ship a package-list from Oracle, but we
: should change the hardcoded javadoc.link value in common-build.xml to
: (ie: the empty string) ... that causes the build to succeed completely --
: if users want happy/shiny/pretty links to java.lang.* classes then
: If the fact that javadocs fails because Oracle removed the
: package-list for java5 javadocs is a blocker,
:
: then on RC3 should i vote -1 because the tests often completely hang
: on java5 due to JVM concurrency bugs?
a) API Compatibility has nothing to do with which JVM you choose to try
: But I'm just pointing out that with bugs like the javadocs thing,
: its not really a lucene bug. Likely its a transient issue with
: oracle's configuration... does that mean all of our past releases are
: broken too? Because ant javadocs won't work there either.
Those releases are already
: I would love to know when it is okay to return a ConstantScoreQuery
: wrapping my Filter so that I needn't bother with my ValueSource. In my
: opinion, FieldType should have a getFieldFilter() method similar to
: getFieldQuery(). Perhaps a hint of some kind could be added to the
: QParser
Completely removing all of this info seems like more harm than good -- it
actually advises against doing an optimize except when you know you're
never going to modify your index, and it explains the downsides of
optimizing.
I would suggest we add most of this back, but perhaps change the
: First of all, thanks for this good project. I would like to know if there
: exist papers or documents related to a theoretical model of response time of
: Apache Solr or Apache Lucene.
I don't really understand your question -- what would a theoretical model
of response time look like?
If you are
: The bug is that QueryParser tries to be a Tokenizer and breaks on whitespace.
: Allowing tokenizer access to the query string would just mean that
Calling this a bug in the QueryParser is grossly misleading -- it's like
saying that QueryParser is buggy because it parses on whitespace
: its not really misleading. Its a bug.
...
: the queryparser's grammar/behavior is hardly set in stone. we can
: improve it, that's why the issue is open for anyone that figures out a
: good solution here.
it's working exactly as designed: whitespace delimits clauses.
a new parser (or
: URL: http://svn.apache.org/viewvc?rev=1189160&view=rev
: Log:
: disable test
is there some context to this?
is the test broken? is the code being tested broken?
is there an open Jira issue related to whatever the problem is?
can we get a Jira number in that @Ignore message so people who
: I'm using SolrTestCaseJ4 to do some tests for my program. I want to create
: a core with a different core name, shard name, collection name, data dir etc.
: But when I change the solr.xml core section, SolrTestCaseJ4 still creates
: the core with defaults. So can I let SolrTestCaseJ4 use solr.xml
: How often does a sync between the following two occur?
:
: https://github.com/apache/lucene-solr
: http://svn.apache.org/repos/asf/lucene/dev/trunk
I think you'll have to ask the github team that question, it's their
mirror.
This is all the definitive info I know about Git mirrors @
: Subject: DUH2 getStatistics() ok?
eks: to close the loop, I read your message yesterday and asked miller
about it on IRC, and that led him to committing r1178632.
thank you for catching that.
https://svn.apache.org/viewvc?view=revision&revision=1178632
-Hoss
: Do you have any plans to support function queries on score field? for
: example, sort=floor(product(score, 100)+0.5) desc?
You most certainly can compute function queries on the score of a
query, but you have to be explicit about which query you want to use the
score of. You seem to
: I have queries with a big big amount of OR terms. The AND terms are much
: more convenient to handle because they can be turned into several filter
: queries and cached.
:
: Thinking about innovative solutions I recalled De Morgan's laws
: http://en.wikipedia.org/wiki/De_Morgan's_laws of
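Stepping back from Solr specifics, the identity behind that idea can be checked exhaustively; a tiny sketch (plain boolean logic, hypothetical class name): `!(a || b)` equals `(!a && !b)`, which is what lets a big OR clause be rewritten as a negated AND of negated clauses.

```java
public class DeMorgan {
    public static void main(String[] args) {
        boolean[] vals = {false, true};
        for (boolean a : vals) {
            for (boolean b : vals) {
                // De Morgan: NOT (a OR b) == (NOT a) AND (NOT b)
                if ((!(a || b)) != (!a && !b)) {
                    throw new AssertionError("identity violated for " + a + "," + b);
                }
            }
        }
        System.out.println("De Morgan holds for all inputs");
    }
}
```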
: I'm just starting to get into solr development and I want to try writing a
: custom Scoring Class. I copied the DefaultSimilarity class and renamed it
...
: I then make sure my TestSimilarity is always used by editing
: conf/schema.xml to have this line:
: similarity
: Can someone explain the rationale behind killing the JVM in
: SimplePostTool.fatal(...)?
because it is a simple post tool
it is neither intended nor recommended for use in a larger context other
than as a simple post tool on the command line.
when it encounters an error, it exits.
-Hoss
: Then what is the recommended way of posting files to Solr?
where are you seeing such a recommendation?
SimplePostTool is provided only as a simple tool for posting the example
files and demonstrating the basics of doing an HTTP Post in java.
There are no docs that I know of that recommend
: This is 100% reproducible on my local machine (run from
solr/contrib/extraction/):
:
: ant test -Dtestcase=ExtractingRequestHandlerTest
-Dtestmethod=testCommitWithin
-Dtests.seed=-2b35f16e02bddd0d:5c36eb67e44fc16d:-54d0d485d6a45315
I reopened SOLR-2540, where this test was added.
Jan? are
: http://wiki.apache.org/nutch/NutchTutorial#A6._Integrate_Solr_with_Nutch) I
: should copy schema.xml of Nutch to conf directory of Solr.
: So I added all of my required Analyzer like *ICUNormalizer2FilterFactory *to
: this new schema.xml. Maybe all of my problems are deriving from this file!
:
https://people.apache.org/~hossman/#solr-user
Please Use solr-user@lucene Not dev@lucene
Your question is better suited for the solr-user@lucene mailing list ...
not the dev@lucene list. The dev list is for discussing development of
the internals of Solr and the Lucene Java library ... it is
: In trunk, you can return a function value -- but i'm not sure what the
: syntax for the 'fieldvalue' function would be. To use other
: functions, you just put it in the list:
: fl=name,id,ord(id)
it's field(name_of_field_as_quoted_string)
...but i'm not sure how the DocTransformer code
: OK, here's a first cut at a patch trying to fix some of these issues.
: AutoCommitTest.testSoftCommitMaxTime() is failing once in a while though...
not sure what's up yet.
many things in AutoCommitTest are abominations that should be
purged from the earth.
see SOLR-2565 and the
: GIGO is a valid response...
I don't think GIGO is a valid attitude for a parser whose whole purpose
is to accept anything an end user might throw at it and try to do its
best
I agree with David: I think it's a bug that 0 length prefix/wildcard
queries are accepted by default with no
: This link appears dead (404 Not found error), Is there anything we
: should do about it?
:
: http://lucene.apache.org/solr/api/index.html
it looks like this was broken when Jan updated the website the other day
... he moved the old ./solr dir (that had a ./solr/api dir) out of the way
to
: Subject: trunk test failure (1314308641)
Weird stuff here with initRandom and MockSep codec...
[junit] Testsuite: org.apache.solr.update.DirectUpdateHandlerOptimizeTest
[junit] Testcase: org.apache.solr.update.DirectUpdateHandlerOptimizeTest:
FAILED
[junit] (null)
[junit]
: on Jenkins and also locally. I am not sure which test is the problem, but
: it's incredibly slow. Jenkins used to need 12 mins, now 4 hrs for the whole
: test build!
: e.g. https://builds.apache.org/job/Lucene-Solr-tests-only-trunk/10201/
I'm not sure if you can trust these times, but according
FYI: If you look at the history of the builds, and ignore the ones that
were killed or failed early (if a core lucene test fails, it never tries
the solr tests) it looks like the first change that could have caused
long test times might be as old as #10179 (hard to tell)
: oops, well this one is definitely the slowest, but the other test
: methods add up to hours also :)
not within each test class... testUnicode does seem to be the crux of the
problem in each of the really slow test classes (it's defined in a
subclass so it's run as part of all these classes
: is the first slow one which was canceled for some reason.
:
: And the commit there is:
: http://svn.apache.org/viewvc?view=revision&revision=1157425
:
: This is the cause of the problems which is in my opinion very serious.
...but how? why? ...
why is it only screwing this testUnicode
: I don't think its a crux of the problem? the non-testUnicode methods
: add up to ( 9 + 47 + 53 = 111 minutes)
I'm confused as to what you are looking at to get those numbers ... are
you summing all the times from different tests? because those are
typically running in diff threads, so i'm
: Slow here too: windows 7 64bit 1.6.0_27-server
If you and rmuir can reproduce this speed consistently, can you try locally
reverting the commit you suspect (SOLR-2565: r1157425) to see if that
fixes the problem for you?
And/or post the Solr logs from some of these tests so we can see what it
: I'm quoting you here: SolrExampleJettyTest.testUnicode - 2 hr 50
: min of 3 hr 43 min total
:
: I did subtraction... so the other test methods in this single class
: are doing *something* for nearly an hour here.
i don't disagree with you ... i just didn't understand your 9 + 47 + 53
: Some investigations:
: It seems to be a deadlock or some other threading related issue. Sometimes
: the test passes with the commit also in 8 or 10 seconds, but most of the
: time it hangs here.
: With the commit reverted it passes always in constant time.
i would suggest reverting, reopening
: with cache warming in Solr I found that the order of the Query objects
: within the filters list makes a difference to the equals() and hashCode()
: methods.
...
: I found that it resulted in a cache miss for two queries that have the same
: results just because the filters had a
: Would it be possible to use a custom class or List implementation which
: generates a hash code in such a way that the order they are combined makes
: no difference? The change only really needs to effect whether a key is
: selected from the cache rather than altering the order of the filters.
: I opened SOLR-2691 to track and attached a patch.
:
: Would appreciate a quick look from a committer. Thanks!
I'm not too familiar with that code, but i can definitely reproduce the
bug ... i'll take a look at the existing tests and see if i can help out
with your patch.
-Hoss
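A minimal sketch of the kind of order-insensitive key suggested above (hypothetical class names, not the actual SOLR-2691 patch): backing the key with a Set makes equals() and hashCode() ignore the order the filters were listed in.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UnorderedFilterKey {
    private final Set<String> filters;

    UnorderedFilterKey(List<String> filters) {
        // A HashSet's equals()/hashCode() are order-independent,
        // so [a, b] and [b, a] produce the same cache key.
        this.filters = new HashSet<>(filters);
    }

    @Override public int hashCode() { return filters.hashCode(); }

    @Override public boolean equals(Object o) {
        return o instanceof UnorderedFilterKey
            && filters.equals(((UnorderedFilterKey) o).filters);
    }

    public static void main(String[] args) {
        UnorderedFilterKey ab = new UnorderedFilterKey(List.of("type:book", "inStock:true"));
        UnorderedFilterKey ba = new UnorderedFilterKey(List.of("inStock:true", "type:book"));
        System.out.println(ab.equals(ba) && ab.hashCode() == ba.hashCode());
    }
}
```

Note that a Set also collapses duplicate filters; if duplicates ever matter, a sorted copy of the list would stay order-independent while preserving them.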
: fix test to not create invalid unicode
I'm confused ... when/why does randomFixedByteLengthUnicodeString not
return valid unicode?
: - // this test needs the random data to be valid unicode
: - String s = _TestUtil.randomFixedByteLengthUnicodeString(random,
data.length);
-Hoss
: my field category (string) has omitNorms=True and
omitTermFreqAndPositions=True.
: i have indexed all docs but when i do a search like:
: http://xxx:xxx/solr/select/?q=category:A&debugQuery=on
: i see there's normalization and idf and tf. Why? i can't understand the
reason.
those options
: i don't think that's very important, it's just our internal tests, and everyone
here is a committer.
Wasn't the new randomization added to the test-framework?
we advertise that as a package users can use to write their own tests
using lucene -- if we had tests that depended on order other
: We just made it random always, so it will fail consistently regardless
: of JRE implementation.
Right, which means people who:
* have an app that uses Lucene
* have tests that use the Lucene test-framework
* upgrade lucene w/o changing their app or their JVM
...could now get failures.
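To illustrate the reproducibility angle with plain java.util.Random (not the Lucene test-framework itself): a fixed seed yields an identical sequence on every spec-compliant JVM, which is why seed-based randomization reproduces failures consistently, and also why changing the randomization scheme on upgrade can surface new ones.

```java
import java.util.Random;

public class SeededRandom {
    public static void main(String[] args) {
        // Two generators with the same seed produce the same sequence,
        // on any spec-compliant JVM (the algorithm is specified).
        Random a = new Random(42L);
        Random b = new Random(42L);
        boolean same = true;
        for (int i = 0; i < 5; i++) {
            same &= a.nextInt(100) == b.nextInt(100);
        }
        System.out.println(same);
    }
}
```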
:
: I was looking for a MOD function in SOLR, but I couldn't find it. Is there any
: solution that isn't directly in SOLR, or can you implement this function (if
: you can, so when?)?
:
: It's a very important function for our project. For example we need to search
: by five-year span, decade, etc.
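Not a Solr-specific answer, but the arithmetic the question needs (hypothetical year value): integer modulo floors a year to the start of its decade or five-year bucket.

```java
public class DecadeBucket {
    public static void main(String[] args) {
        int year = 1987;
        // Subtracting year % n floors the year to the start of its n-year bucket.
        System.out.println(year - (year % 10)); // start of the decade
        System.out.println(year - (year % 5));  // start of the five-year span
    }
}
```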