What are the conditions that cause corruption? If there is just one
writer and multiple readers, is that safe?
The cases are well spelled out in Lucene in Action, section 2.9.
Generally, even one writer and multiple readers is not a safe
configuration for disabling locking.
For example, the IndexReader, when
Can anybody suggest how to avoid this problem and concurrently access
the index across the network while maintaining it at the same time?
Unfortunately, there are known issues with locking and NFS. The lock
files (and underlying locking protocol) do not work reliably when used
over
I would be grateful for some tips, as this is my first approach to Lucene...
Is it your IndexSearcher instantiation that's raising the Lock obtain
time out exception?
Can you look in your java.io.tmpdir and see if there are any Lucene lock
files present even when Lucene is not running? If
I did a search on the Lucene list archives, found a lot of posts about
the use of Lucene with NFS and how there are locking issues, but don't
see anybody coming to a real solution to this.
We are trying to fix this. Many people seem to hit it.
The current plan is to first decouple the
If I use the IndexReader and IndexWriter classes for inserts/updates, then I need
to handle the threading issues myself. Is there any other class (even in a
nightly build) that I can use without having to take care of synchronization?
All this means is your code must ensure only one writer
I am not very good at threading, so I was looking to see if there is any API class (even in nightly builds) on top of IndexReader/IndexWriter that takes care of the concurrency rules.
This is exactly why IndexModifier was created (so you wouldn't have to
worry about the details of closing/opening
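A minimal sketch of the guarantee such a wrapper class provides, using a plain Java stand-in (SynchronizedModifierSketch and its in-memory "index" are hypothetical illustrations, not Lucene classes): every mutating method is synchronized, so callers need no locking of their own.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in illustrating the synchronization pattern a
// modifier class uses: every mutating method is synchronized, so
// callers never coordinate writer access themselves.
class SynchronizedModifierSketch {
    private final List<String> docs = new ArrayList<String>();

    public synchronized void addDocument(String doc) {
        docs.add(doc);
    }

    public synchronized void deleteDocuments(String term) {
        docs.remove(term);
    }

    public synchronized int docCount() {
        return docs.size();
    }

    public static void main(String[] args) throws Exception {
        final SynchronizedModifierSketch mod = new SynchronizedModifierSketch();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < 1000; j++) mod.addDocument("doc");
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(mod.docCount()); // 4000: no updates lost
    }
}
```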
When I close my application containing index writers, the
lock files are left in the temp directory, causing a Lock obtain
timed out error upon the next restart.
My guess is that you keep a writer open even though there is no activity
involving adding new documents. Unless I have a massive never
I met this problem: while searching, I add documents to the index. Although I
instantiate a new IndexSearcher, I can't retrieve the newly added
documents. I have to close and restart the program; then it will
be OK.
Did you close your IndexWriter (so it flushes all changes to disk)?
When the indexing process is still running on an index and I try to search
something on this index, I retrieve this error message:
java.io.FileNotFoundException:
\\tradluxstmp01\JavaIndex\tra\index_EN\_2hea.fnm (The system cannot find
the file specified)
How can I solve this?
Could you provide
I think it's a directory access synchronisation problem; I have also
posted about this before. The scenario can be like this:
when the IndexWriter object is created, it reads the segment information from
the segments file, which is nothing but a list of files of .cfs or many
more types; at the same
Yes
Yes, you're certain you have the same lock dir for both the modifier and
the search process?
Or, Yes you're using NFS as your lock dir?
Or, both?
Mike
OK, if I understood correctly, I have to put the lock file in the same place in
my indexing process and my searching process.
That's correct.
And, that place can't be an NFS mounted directory (until we fix locking
implementation...).
The two different processes will use this lock file to make sure
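The single-lock-file idea can be sketched with plain java.io.File. LockDirSketch is a hypothetical illustration, but Lucene's simple file-based lock in this era relies on the same File.createNewFile() atomicity: whichever process creates the file first holds the lock, which only works if both processes agree on the lock directory.

```java
import java.io.File;
import java.io.IOException;

// Sketch of the "both processes must agree on one lock directory"
// rule. File.createNewFile() is atomic, so the first process to
// create the file holds the lock; a second attempt returns false.
class LockDirSketch {
    private final File lockFile;

    public LockDirSketch(File lockDir, String name) {
        this.lockFile = new File(lockDir, name);
    }

    public boolean obtain() throws IOException {
        return lockFile.createNewFile();
    }

    public void release() {
        lockFile.delete();
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        new File(dir, "sketch-write.lock").delete(); // clear stale lock
        LockDirSketch writerLock = new LockDirSketch(dir, "sketch-write.lock");
        LockDirSketch searcherLock = new LockDirSketch(dir, "sketch-write.lock");
        try {
            System.out.println(writerLock.obtain());   // true: first obtain wins
            System.out.println(searcherLock.obtain()); // false: same lock dir blocks it
        } finally {
            writerLock.release();
        }
    }
}
```

Note this atomicity guarantee is exactly what breaks down over NFS, which is why the thread warns against an NFS-mounted lock dir.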
Yes, I use the NFS mount to share the index with the other search
instances, and all the instances have the same lock directory configured; the
only difference is that the NFS mount is a read-only mount, so I have to
disable the lock mechanism for the search instances, and locking is enabled
only for the indexer.
For the indexing process I use the IndexModifier class.
That happens when I try to search something in the index at the same
time as the indexing process is still running.
The code for indexing:
System.setProperty("org.apache.lucene.lockDir", System
    .getProperty("user.dir"));
My application's database can also be updated outside the application. Whenever
there is a change in the database made by some other source, I want to update my index.
Is there any way to do so?
I am using Java and the database is DB2. I saw the DB2 UDF. But I have to put the jar inside the
The Lucene code is crashing under circumstances that seem pretty lame.
At periodic intervals, lucene tries to File.renameTo(newfile).
Sometimes this fails, so Lucene implemented some fall-back code to
manually copy the contents of the file from old to new. Our problem is
that sometimes *this*
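The rename-then-fallback-copy behavior described above can be sketched in plain Java. RenameSketch is an illustration under the assumption stated in the thread (renameTo can fail spuriously on Windows, so a manual byte copy is attempted), not Lucene's actual FSDirectory code:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of rename-with-fallback: try File.renameTo first (it can
// fail spuriously on Windows), then fall back to a manual byte copy
// followed by deleting the source.
class RenameSketch {
    static void rename(File from, File to) throws IOException {
        if (to.exists() && !to.delete())
            throw new IOException("cannot delete " + to);
        if (from.renameTo(to))
            return; // fast path: atomic rename worked
        // Fallback: copy contents manually, then remove the source.
        InputStream in = new FileInputStream(from);
        OutputStream out = new FileOutputStream(to);
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1)
                out.write(buf, 0, n);
        } finally {
            in.close();
            out.close();
        }
        if (!from.delete())
            throw new IOException("cannot delete " + from);
    }

    public static void main(String[] args) throws IOException {
        File from = File.createTempFile("seg", ".tmp");
        FileOutputStream fos = new FileOutputStream(from);
        fos.write("hello".getBytes());
        fos.close();
        File to = new File(from.getParentFile(), from.getName() + ".renamed");
        rename(from, to);
        System.out.println(to.exists() && !from.exists()); // true
        to.delete();
    }
}
```

The failure mode the posters hit is that on Windows even this fallback can fail if another process (a searcher, Explorer, a virus scanner) still holds the file open.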
This is a Windows/JVM issue. Have a look at how Ant deals with it; maybe we
could give it a try with something like that (I have not noticed Ant having
problems).
Indeed it seems like Windows somehow believes the deletable file must
be still held open, given that the File.renameTo and
XP Professional / Win 2003 Server; we had this issue on JVMs 1.5/1.6.
It seems this happens less often on 1.6/Win2003, but we have had this in
production for only 2 weeks.
We have single update machine that builds index in batch and replicates to many
Index readers, so at least customers
In this post Erik says:
Sure, you can subclass DefaultSimilarity and override and tweak just
the lengthNorm() method. Be sure to use IndexWriter.setSimilarity()
to get your custom one used.
Well, I traced my own lengthNorm method and realized that this method is not
being called.
The
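For reference, DefaultSimilarity's lengthNorm is 1/sqrt(numTerms), and it is applied at indexing time: documents indexed before IndexWriter.setSimilarity was called keep their old norms, which is one common reason an override appears never to be called. A self-contained sketch of the override pattern Erik describes (DefaultNorm and FlatNorm are stand-ins for illustration, not Lucene classes):

```java
// Self-contained sketch of the lengthNorm override pattern.
// DefaultNorm mirrors DefaultSimilarity's 1/sqrt(numTerms) formula;
// FlatNorm is a hypothetical tweak that removes the length penalty.
class LengthNormSketch {
    static class DefaultNorm {
        public float lengthNorm(String field, int numTerms) {
            return (float) (1.0 / Math.sqrt(numTerms));
        }
    }

    static class FlatNorm extends DefaultNorm {
        // Override: ignore document length entirely.
        public float lengthNorm(String field, int numTerms) {
            return 1.0f;
        }
    }

    public static void main(String[] args) {
        DefaultNorm def = new DefaultNorm();
        DefaultNorm flat = new FlatNorm();
        System.out.println(def.lengthNorm("body", 4));  // 0.5
        System.out.println(flat.lengthNorm("body", 4)); // 1.0
    }
}
```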
How do I remove lucene locks (startup) if there are multiple applications
using lucene on the same box and all use the same lock dir?
The lock files are just files, so you can go ahead and remove them.
However: this is in general dangerous and should not be necessary.
Lucene uses the lock files
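A sketch of such a startup cleanup, assuming the lucene-&lt;hash&gt;-*.lock naming seen elsewhere in this thread; as the reply stresses, this is only safe when you are certain no Lucene process is running:

```java
import java.io.File;

// Sketch: list and remove leftover Lucene lock files from the lock
// directory (java.io.tmpdir by default in this era). Only safe when
// no Lucene process is running.
class StaleLockSketch {
    static int removeStaleLocks(File lockDir) {
        int removed = 0;
        File[] files = lockDir.listFiles();
        if (files == null) return 0;
        for (File f : files) {
            String name = f.getName();
            if (name.startsWith("lucene-") && name.endsWith(".lock") && f.delete())
                removed++;
        }
        return removed;
    }

    public static void main(String[] args) throws Exception {
        File dir = new File(System.getProperty("java.io.tmpdir"), "lock-sketch");
        dir.mkdirs();
        new File(dir, "lucene-abc123-write.lock").createNewFile();
        new File(dir, "lucene-abc123-commit.lock").createNewFile();
        System.out.println(removeStaleLocks(dir)); // 2
    }
}
```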
Simon Willnauer wrote:
The index writer creates the lock in its constructor via the public
FSDirectory makeLock method.
regards simon
On 8/8/06, Leandro Saad [EMAIL PROTECTED] wrote:
I'm trying to use them, and I may be wrong, but I can't unlock the dir
before I create the Directory, right?
I'm not sure if it would help my particular situation, but is there any way
to provide the option of specifying the compression level? The level used
by Lucene (level 9) is the maximum possible compression level. Ideally I
would like to be able to alter the compression level on the basis of
I have a sample document which has about 4.5MB of text to be stored as
compressed data within the field, and the indexing of this document
seems to
take an inordinate amount of time (over 10 minutes!). When debugging I can
see that it's stuck on the deflate() calls of the Deflater used by
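The cost difference is easy to reproduce with java.util.zip directly. This sketch compares Deflater.BEST_SPEED with BEST_COMPRESSION (level 9, the maximum level mentioned above); on large repetitive text the time gap is what the poster is hitting:

```java
import java.util.zip.Deflater;

// Compares java.util.zip.Deflater at BEST_SPEED vs BEST_COMPRESSION
// (level 9). The output buffer is sized with headroom so the drain
// loop always terminates.
class DeflateLevelSketch {
    static byte[] deflate(byte[] input, int level) {
        Deflater d = new Deflater(level);
        d.setInput(input);
        d.finish();
        byte[] buf = new byte[input.length + input.length / 1000 + 64];
        int len = 0;
        while (!d.finished())
            len += d.deflate(buf, len, buf.length - len);
        d.end();
        byte[] out = new byte[len];
        System.arraycopy(buf, 0, out, 0, len);
        return out;
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100000; i++) sb.append("some repetitive field text ");
        byte[] input = sb.toString().getBytes();
        long t0 = System.nanoTime();
        int fast = deflate(input, Deflater.BEST_SPEED).length;
        long t1 = System.nanoTime();
        int best = deflate(input, Deflater.BEST_COMPRESSION).length;
        long t2 = System.nanoTime();
        System.out.println("level 1: " + fast + " bytes, " + (t1 - t0) / 1000000 + " ms");
        System.out.println("level 9: " + best + " bytes, " + (t2 - t1) / 1000000 + " ms");
    }
}
```

If the level mattered for your data, the workaround in this era was to compress the bytes yourself with a chosen level and store them as a binary field, rather than using Field's built-in compression.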
I am refactoring our search code that was written prior to 1.4.3. I am
using Lucene 2.0 now. The search string entered by users was actually
parsed by our custom code to generate the query. This code was getting
fairly big and messy and I'm changing the code to use Lucene's query
parsers
It was a little comforting to know that other people have
seen Windows Explorer refreshes crash java Lucene on Windows. We seem
to be running into a long list of file system issues with Lucene, and I
was wondering if other people had noticed these sort of things (and
hopefully any
I do appreciate the thoroughness and graciousness of your
responses, and I hope there's nothing in my frustration that you would
take personally. Googling around, I've found other references to the
sun jvm handling of the Windows file system to be, well, quixotic at
best.
No problem!
In my project, I want to update Lucene's index whenever there are database
insertion operations; this way, my users can search the fresh information
immediately after someone inserts it into the database. That's what I
need; could someone give me suggestions to implement my
Jason Polites wrote:
I'm not sure about the solution in the referenced thread. It will work,
but doesn't it run the risk of breaching the transaction isolation of the
database write?
The issue is when the index is notified of a database update. If it is
notified prior to the transaction
If I try to add documents to an index while a reader is open, I get an error message
saying Cannot delete C:\myindex\_3n.f0. I suspect that this is due to the
fact that the Windows FS won't allow deletion of a file when there is a file handle
open on it. The solution I have at the moment
It would be nice if someone shared design documents of Lucene with me.
You could start with the javadocs here:
http://lucene.apache.org/java/docs/api/index.html
Click on the Document class to see some description of Documents in
particular.
Or for a broader get your feet wet introduction,
Chris Hostetter wrote:
: I added one record to the index and did flush(), optimize() and close() in
that order.
: I had one index file _twca.cfs. After inserting the document and doing
optimization, I have two index files _twca.cfs and _twcf.cfs (both approx. the same
size) and deletable
We are upgrading from Lucene 1.4.3 to 1.9.1, and have many customers
with large existing index files. In our testing we have reused large
indexes created in 1.4.3 in 1.9.1 without incident. We have looked
through the changelog and the code and can't see any reason there should
be any
Jason Polites wrote:
Are you also running searchers against this index? Are they re-init'ing
frequently or being opened and then held open?
No searches running in my initial test, although I can't be certain what is
happening under the Compass hood.
OK.
This looks similar to
While searching the net for 2.0 API examples, I noticed there aren't that
many. The only example I have seen is the stock example. Are there any
tutorials or example code out there?
You can start with the getting started page which walks through the demo
code:
Erik Hatcher wrote:
Also let me also emphasize the test cases that are built into the Lucene
codebase itself. These are premium *always working* examples of how to
use specific parts of Lucene in an isolated fashion. Check out
Lucene's trunk (or 2.0 branch) via Subversion and enjoy.
Here
Yonik Seeley wrote:
On 8/27/06, Doron Cohen [EMAIL PROTECTED] wrote:
I plan to submit an update to that patch later today accommodating your
comments (and others); It will most probably retry for IOExceptions (not
analyzing the exception text); also, it would return false if the *retry*
for
Doron Cohen wrote:
Jason Polites [EMAIL PROTECTED] wrote on 27/08/2006 09:36:07:
I would have thought that simultaneous cross-JVM access to an index was
outside of scope of the core Lucene API (although it would be great), but
maybe the file system basis allows for this (?).
Lucene does
Jason Polites wrote:
Yeah.. I had a think about this, and I now remember why I originally
came to the conclusion about cross-JVM access.
When I was adding documents to the index and searching at the same time
(from a different JVM), I would get the occasional (but regular)
Jason Polites wrote:
It was definitely NTFS; unfortunately it was a while ago, and most of the
code has changed.
Basically I had a multi-threaded app where multiple threads were writing to
the index (but exclusively... that is, I had my own locking mechanism
preventing concurrent writes).
In a
Bhavin Pandya wrote:
I am running Lucene 1.9 on a unix machine... updating my index very frequently; after a few
updates it says read past EOF.
I know this exception generally comes when one of the indexes gets corrupted... but
I don't know why it got corrupted?
Maybe it's a problem in my code, but I am
Stanislav Jordanov wrote:
After all, the Lucene's CFS format is abstraction over the OS's native
FS and the App should not be trying to open a native FS file named *.fnm
when it is supposed to open the corresponding *.cfs file and manually
extract the *.fnm file from it.
Right?
Yes, good
Stanislav Jordanov wrote:
For a moment I wondered what exactly do you mean by compound file?
Then I read http://lucene.apache.org/java/docs/fileformats.html and got
the idea.
I do not have access to that specific machine that all this is happening
at.
It is an 80x86 machine running Win 2003
Bhavin Pandya wrote:
My guess is... one of my indexes got corrupted, so whenever I try
to search the index, optimize the index, or merge the multiple indexes
...it throws the same exception but from a different class... sometimes
from IndexReader and sometimes from IndexWriter, depending on how
You probably forgot to close an IndexWriter?
Well, I wish it were that easy...I open one IndexWriter to write the
documents to the index after it is created, and then call writer.optimize()
and writer.close(). Your suggestion is a good one in that, from what I've
read, the writer needs to be
Yes, I am sure only one writer at a time is accessing the index.
No, I am not getting any other exception,
and there is no problem with disk space either.
Right now I have a backup copy of the indexes, so whenever one index gets corrupted
I replace it with the backup one and start the indexer again from that
jacky wrote:
There is a question about the delete operation; I have not found any doc in
the Lucene API's javadoc:
when using delete(Term term) of IndexReader and committing while, at the same time,
an IndexSearcher is open, the deleted document can still be searched until
the IndexSearcher is reopened, I
Bhavin Pandya wrote:
It sounds like you're working with the index correctly, so I don't
have any other ideas on why you're getting CFS files that are
truncated. I would worry about the cp step filling up the disk, but if
you're nowhere near filling up disk that's not the root cause here.
I
Hi all,
There is an issue opened on Lucene:
http://issues.apache.org/jira/browse/LUCENE-665
that I'd like to draw your attention to and summarize here because
recently users have hit it.
The gist of the issue is: on Windows, you sometimes see intermittent
Access Denied errors in renaming
Jason Polites wrote:
I've also seen FileNotFound exceptions when attempting a search on an index
while it's being updated, and the searcher is in a different JVM. This is
supposed to be supported, but on Windows seems to regularly fail (for me
anyway).
Note that this use case (accessing one
Mark Miller wrote:
I'll one up you:
http://www.manning.com/hatcher2/
Might as well save yourself a whole lot of time and just buy the book.
If you're going to use Lucene it might as well be required.
There is also Getting Started on the Lucene web site:
Van Nguyen wrote:
I only get this error when using the server version of jvm.dll with my
JBoss app server… but when I use the client version of jvm.dll, the same
index builds just fine.
This is an odd error. Which OS are you running on? And, what kind of
filesystem is the index directory
Bhavin Pandya wrote:
Before you open the IndexWriter object, you can check whether the lock file
exists, and if it does, you can unlock it.
Use IndexReader.isLocked and IndexReader.unlock.
Also, you could use a try / finally and always close the IndexWriter in
the finally clause, which
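The try/finally suggestion can be sketched with a hypothetical stand-in writer (FakeWriter is not a Lucene class); the point is only that close(), which is what releases the write lock, runs even when indexing throws:

```java
// Sketch of the try/finally pattern suggested above: close() always
// runs, so the write lock is released even on failure.
class TryFinallySketch {
    static class FakeWriter {
        boolean closed = false;
        void addDocument(String doc) {
            if (doc == null) throw new IllegalArgumentException("bad doc");
        }
        void close() { closed = true; } // stands in for releasing the lock
    }

    static FakeWriter indexSafely(String doc) {
        FakeWriter writer = new FakeWriter();
        try {
            writer.addDocument(doc);
        } finally {
            writer.close(); // always release the write lock
        }
        return writer;
    }

    public static void main(String[] args) {
        FakeWriter w = indexSafely("hello");
        System.out.println(w.closed); // true
        try {
            indexSafely(null); // throws, but close() still ran
        } catch (IllegalArgumentException e) {
            System.out.println("writer was still closed despite the exception");
        }
    }
}
```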
Van Nguyen wrote:
I'm running this on Windows 2003 server (NTFS). The Java VM version is
1.5.0_06. This exception is not consistent, but it is not intermittent
either. It does not throw it at any particular point while rebuilding
the index, but it will throw this exception at some point (it
Hes Siemelink wrote:
It happens from time to time... but I don't know how to reproduce it.
Rebuilding this particular index unfortunately takes about 10 hrs, so it's
not feasible to delete the index and rebuild it when this happens... our
users would be missing a lot of search results then!
Hes Siemelink wrote:
Not making much progress, but there is one thing I found curious: very often
the file that cannot be found is _8km.fnm.
Is it possible to derive any information from this?
Hmmm, that's interesting. Segment numbers are just integers encoded
in base 36, ie, using the
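That base-36 decoding can be checked directly with the standard library; segment _8km would be roughly the 11,110th segment written:

```java
// Segment file names are the segment counter encoded in base 36, so
// _8km.fnm can be decoded (and re-encoded) with Long.parseLong:
class SegmentNameSketch {
    public static void main(String[] args) {
        long n = Long.parseLong("8km", 36);
        System.out.println(n);                    // 11110
        System.out.println(Long.toString(n, 36)); // 8km
    }
}
```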
Sunil Kumar PK wrote:
could you please explain?
On 10/26/06, Karel Tejnora [EMAIL PROTECTED] wrote:
Nope. An IndexReader obtains a snapshot of the index; not closing and reopening
the IndexReader means old files are not deleted (on Windows you get an
exception, on Linux the space is simply not freed).
Is it possible to get all the
The quick answer is: NFS is still problematic in Lucene 2.0.
The longer answer is: we'd like to fix this, but it's not fully fixed
yet. You can see here:
http://issues.apache.org/jira/browse/LUCENE-673
for gory details.
There are at least two different problems with NFS (spelled out in
Rajesh parab wrote:
Does anyone know if there is any plan in adding transaction support in Lucene?
I don't know of specific plans.
This has been discussed before on user dev lists. I know the
Compass project builds transactional support on top of Lucene.
Are you asking for transaction
Rajesh parab wrote:
I am talking about transaction support in Lucene only. If there is a failure
during an insert/update/delete of a document inside the index, there is no way to
roll back the operation, and this will leave the index in an inconsistent state.
OK, I see. Then you should also look at
Antony Bowesman wrote:
Hi,
I have IndexWriter.infoStream set to System.out and get the following:
merging segments _4m (2 docs) _4n (1 docs) into _4o (3 docs)
java.io.IOException: Cannot delete PathToDB\_29.cfs; Will re-try later.
java.io.IOException: Cannot delete PathToDB\_29.cfs; Will
unreferenced index
files correctly but this hasn't been released yet).
Mike
- Aleksander
On Thu, 31 Aug 2006 03:42:28 +0200, Michael McCandless
[EMAIL PROTECTED] wrote:
Stanislav Jordanov wrote:
For a moment I wondered what exactly do you mean by compound file?
Then I read http://lucene.apache.org
Aleksander M. Stensby wrote:
Works like a charm, Michael! (The only thing is that SegmentInfos /
SegmentInfo are final classes, which I didn't know, so I was bugging
around to really find the classes :) heh.)
I was able to remove the broken segment. I must now get the MAX(id) from
the clean
Yonik Seeley wrote:
Actually, in previous versions of Lucene, it *was* possible to get way
too many first level segments because of the wonky logic when the
IndexWriter was closed. That has been fixed in the trunk with the new
merge policy, and you will never see more than mergeFactor first
This looks correct to me. It's good you are doing the deletes
in bulk up front for each batch of documents. So I guess you
hit the error (> 5000 segment files) while processing batches
of 200 docs (because you then optimize in the end)?
Do you search this index while it's building, or, only
Suman Ghosh wrote:
The search functionality must be available during the index build. Since a
relatively small number of documents are being affected (and also we plan to
perform the build during a period of time we know to be relatively quiet,
based on the last 2 years of site access data) during the
Michael McCandless wrote:
Van Nguyen wrote:
I'm running this on Windows 2003 server (NTFS). The Java VM version is
1.5.0_06. This exception is not consistent, but it is not intermittent
either. It does not throw it at any particular point while rebuilding
the index, but it will throw
Otis Gospodnetic wrote:
Hi,
Is anyone running Lucene trunk/HEAD version in a serious production system?
Anyone noticed any memory leaks?
I'm asking because I recently bravely went from 1.9.1 to 2.1-dev (trunk from
about a week ago) and all of a sudden my application that was previously
Yonik Seeley wrote:
On 11/30/06, Chris Hostetter [EMAIL PROTECTED] wrote:
: IndexSearchers open. The other ones I let go without an explicit
: close() call. The assumption is that the old IndexSearchers expire,
: that they get garbage collected, as I'm no longer holding references to
: them.
Otis Gospodnetic wrote:
Wow, that was fast - java-user support is just as fast as I heard! ;)
Well let's withhold judgment until we see if that tool really works
correctly :)
I'll try your patch shortly. Like I said, the bug may be in my application.
Here is a clue. Memory usage
Otis Gospodnetic wrote:
Yeah, in this case, I'm running out of memory, and open file descriptors are, I
think, just an indicator that IndexSearchers are not getting closed properly.
I've already increased the open file descriptors limit, but I'm limited to 2GB
of RAM on a 32-bit box.
I'll
Otis Gospodnetic wrote:
Hi Mike,
Thanks for looking into this. I think your stress test may match my production
environment.
I think System.gc() never guarantees anything will happen, it's just a hint.
I've got the following in one of my classes now. Maybe you can stick it in
your stress
Stanislav Jordanov wrote:
How much free disk space should be there (with respect to the index
size) in order for the optimize to complete successfully?
Good question!
Really this detail should be included in the Javadoc for optimize (and
more generally addDocument, addIndexes(*), etc.). I
Zhang, Lisheng wrote:
Hi,
I indexed the first 220,000 docs, all with a special keyword; I did a simple
query and only fetched 5 docs, with Hits.length()=220,000.
Then I indexed 440,000 docs with the same keyword, queried it
again and fetched a few docs, with Hits.length()=440,000.
I found that search
[EMAIL PROTECTED] wrote:
Forgot something...
Also I got this exception, which may be related:
java.io.IOException: Cannot delete C:\dknewscenter\2\_5d.cfs
at
org.apache.lucene.store.FSDirectory.create(FSDirectory.java:319)
at
[EMAIL PROTECTED] wrote:
Hi,
In my test case, four Quartz jobs start every third minute, storing
records in a database followed by an index update.
After doing a test run over a period of 16 hours, I got this exception
after 10 hours:
java.io.IOException: Access is denied
at
[EMAIL PROTECTED] wrote:
Thank you for quick and detailed answer.
In this system multiple threads will, occasionally, try to write and/or
read the same index, hence the pause waiting for the lock. This is not a
good way to implement it and was done as a temporary solution for debugging
purposes only.
Harini Raghavan wrote:
I am using Lucene 1.9.1 for search functionality in my J2EE application
using JBoss as the app server. The Lucene index directory size is almost 20G
right now. There is a Quartz job that is adding data to the index every
minute, and around 2 documents get added to the index
Yonik Seeley wrote:
On 12/21/06, Michael McCandless [EMAIL PROTECTED] wrote:
Harini Raghavan wrote:
I am using lucene 1.9.1 for search functionality in my j2ee application
using JBoss as app server. The lucene index directory size is almost
20G
right now. There is a Quartz job
Harini Raghavan wrote:
Thank you for the response. I don't have readers open on the index, but
while the optimize/merge was running I was searching on the index. Would
that make any difference?
You're welcome! Right, a searcher opens an IndexReader. So this
means you should see peak @ 3X
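A back-of-envelope for that 3X peak, taking the 20G index from this thread as the example: during optimize the old segments stay on disk (1X), the merged copy is written (another 1X), and an open IndexReader can pin yet another generation of files (a third 1X). A trivial sketch of the arithmetic:

```java
// Rough disk budget for optimize, per the 3X figure above:
// old segments + merged copy + a generation pinned by an open reader.
class OptimizeDiskSketch {
    public static void main(String[] args) {
        double indexGB = 20.0; // e.g. the 20G index from this thread
        double peakNoReaders = 2 * indexGB;   // old + merged copy
        double peakWithReaders = 3 * indexGB; // plus reader-pinned files
        System.out.println(peakNoReaders + " GB");   // 40.0 GB
        System.out.println(peakWithReaders + " GB"); // 60.0 GB
    }
}
```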
Antony Bowesman wrote:
Hi,
I'm running load tests with Lucene 2.0, SUN's JDK 6 on Windows XP2, dual
core CPU. I have 8 worker threads adding a few hundred K documents,
split between two Lucene indexes, I've started getting
java.io.IOException: The handle is invalid in places like
Antony Bowesman wrote:
Hi Mike,
I saw Mike McCandless JIRA issue
http://issues.apache.org/jira/browse/LUCENE-669
Is the patch referenced there useful for a 2.0 system. I would like
to use the lockless commit stuff, but am waiting until I get the core
system working well.
I am also
S Edirisinghe wrote:
I'm having a write lock problem when I try to open an existing index.
When I try to open the index with the recreate set to false, I get this
exception
java.io.IOException: Lock obtain timed out: Lock@/tmp/lucene-
e683c0b3e52b8094bba62b22617efd41-write.lock
at
Hi all,
I would like to draw your attention to an open and rather devious
long-standing index corruption issue that we've only now finally
gotten to the bottom of:
https://issues.apache.org/jira/browse/LUCENE-140
If you hit this, you will typically see a docs out of order
Bhavin Pandya wrote:
What I want to do is:
if the index file exists, append documents;
if the index file does not exist, create a new, empty index file.
Please check the Lucene API for IndexReader...
It has one method which you can use before opening the
IndexWriter... indexExists(Directory
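A sketch of that check: in this era IndexReader.indexExists(dir) essentially tests for the "segments" file, so the IndexWriter create flag can be derived as below. IndexExistsSketch uses plain java.io and stands in for the real call:

```java
import java.io.File;
import java.io.IOException;

// Sketch of the "create only if missing" decision: derive the
// IndexWriter create flag from whether the segments file exists.
class IndexExistsSketch {
    static boolean indexExists(File dir) {
        return new File(dir, "segments").exists();
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"), "exists-sketch");
        dir.mkdirs();
        new File(dir, "segments").delete(); // start from a fresh directory
        boolean create = !indexExists(dir);
        System.out.println(create); // true: no index yet, so create one
        // new IndexWriter(dir, analyzer, create) would then either
        // build a new index (create == true) or append (create == false).
        new File(dir, "segments").createNewFile();
        System.out.println(!indexExists(dir)); // false: now append
    }
}
```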
Doron Cohen wrote:
David [EMAIL PROTECTED] wrote on 15/01/2007 00:36:28:
Thanks, I think I did not describe my problem exactly.
What I want to do is:
if the index file exists, append documents;
if the index file does not exist, create a new, empty index file.
How can I implement that?
Marcel Morisse wrote:
I have a problem with Lucene, and because I am a little bit inexperienced,
I would like to ask you.
I have a database with ca. 2500 items in it. I store these items in a
RAM index and try to rebuild it every 10 minutes. I use the same
procedure as for updating an FSDirectory -
maureen tanuwidjaja wrote:
I am indexing thousands of XML documents; then it stops after indexing for
about 7 hrs
...
Indexing C:\sweetpea\wikipedia_xmlfiles\part-0\37027.xml
java.io.IOException: Lock obtain timed out: [EMAIL
PROTECTED]:\sweetpea\dual_index\DI\write.lock
DECAFFMEYER MATHIEU wrote:
Hi, I have exactly the same question.
Correct me if I'm wrong:
it seems that I can do any I/O operations on the index while querying,
because of the open IndexReader.
So if I had the same situation as gui (the poster of the thread), I can
just delete the old index
Kadlabalu, Hareesh wrote:
Hi,
I am starting to work with Lucene 2.0 and I noticed that we can no
longer create an FSDirectory using a LockFactory.
Could someone point me to some discussion or documentation related to
locking and what has changed in terms of best practices? It appears that
the
Miles Efron wrote:
I seem to be having a problem analogous to this one (no answer that I see):
http://www.gossamer-threads.com/lists/lucene/java-user/32268?search_string=cannot%20overwrite;#32268
The trouble is, I just put Lucene on my new MacBook Pro and am having the
problem that if I
Miles Efron wrote:
You rule. Swapping out the nightly build seems to have fixed the
problem... tried it on two problematic cases and both worked.
Phew!
For the record, I'm running mac os 10.4.8.
Uh-oh, I can't explain why you would hit these errors on OS X 10.4.8;
we have only seen these
Miles Efron wrote:
I really don't know why OS X could have induced those kinds of
filesystem issues. I assumed that since I had switched over to the
Intel architecture, perhaps something was going on with the
JVM... everything involved in the process was Mac: local filesystem, etc.
But
Josh Joy wrote:
I was implementing some calls to Lucene, though I was curious if there was
some documentation I was missing that indicated why a method throws an
exception.
Example: IndexReader.deleteDocuments(): what is the root cause as to
why it throws IOException?
I'm trying to
maureen tanuwidjaja wrote:
I would like to know about optimizing the index...
The exception was hit due to the disk filling up while optimizing the index, and hence the index has not been closed yet.
Is the unclosed index dangerous? Can I perform searching on such an index correctly? Is the index built
maureen tanuwidjaja wrote:
May I also ask whether there is a way to use writer.optimize() without
indexing the files from the beginning?
It took me about 17 hrs to finish building an unoptimized index (finished when I called IndexWriter.close()). I just wonder whether this existing index
On Tue, 20 Feb 2007 10:36:55 +0100, jm [EMAIL PROTECTED] said:
I updated my code to use 2.1 (IndexWriter deleting docs etc), and when
using native locks I still get a lock like this:
lucene-2361bf484af61abc81e6e7f412ad43af-n-write.lock
and when using SimpleFSLockFactory: