have to worry about last day of the month or
timestamp precision. In fact, since the field is actually a timestamp,
the BETWEEN clause posted above won't work correctly.
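As a hedged illustration (table and column names here are hypothetical), the usual fix is a half-open range instead of BETWEEN, which sidesteps both month length and timestamp precision:

-- 'posted' is a timestamp column; the upper bound is exclusive
SELECT count(*)
FROM orders
WHERE posted >= '2007-02-01'
  AND posted <  '2007-03-01';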
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
02-01
> 00:00:00'::timestamp without time zone))
> Total runtime: 10079.256 ms
>
> Strange!!! Why does the index not works?
It's unlikely that it's going to be faster to index scan 2.3M rows than
to sequential scan them. Try setting enable_seqscan=false and see if i
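A minimal sketch of that test (the table and query are stand-ins for the ones in the thread):

SET enable_seqscan = off;      -- discourages, but does not forbid, seqscans
EXPLAIN ANALYZE
SELECT * FROM big_table
WHERE created_at < '2007-02-01 00:00:00';
SET enable_seqscan = on;       -- restore the default afterwards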
Please reply-all so others can learn and contribute.
On Sun, Jul 29, 2007 at 09:38:12PM -0700, Craig James wrote:
> Decibel! wrote:
> >It's unlikely that it's going to be faster to index scan 2.3M rows than
> >to sequential scan them. Try setting enable_seqscan=false a
'll use a delay of 10ms on good disk hardware, and 20ms on
slower hardware.
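Assuming the delay in question is vacuum_cost_delay (consistent with the cost_delay advice that comes up again further down), a session-level sketch:

SET vacuum_cost_delay = 10;     -- milliseconds; 20 on slower disk hardware
VACUUM ANALYZE some_big_table;  -- hypothetical table; vacuum now naps between cost rounds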
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
ly lower the priority
> of the process, though ..
The OS will only lower it to a certain extent.
Also, make sure you understand the concept of priority inversion before
going into production with this solution.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
point in time that fsync() is called, regardless of what
files/information the journal contains. Fortunately I think it's common
knowledge to mount PostgreSQL filesystems with data=writeback, which
hopefully eliminates much of that bottleneck... but if you don't do
noatime you
n of
1k per row, 5M rows means 5G of data, which might well not fit in
memory. And if the entire table's been updated just once before, even
with vacuuming you're now at 10G of data.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
On Tue, Aug 07, 2007 at 02:36:18PM -0500, Scott Marlowe wrote:
> On 8/7/07, Decibel! <[EMAIL PROTECTED]> wrote:
> > On Tue, Aug 07, 2007 at 02:33:19PM +0100, Richard Huxton wrote:
> > > Mark Makarowsky wrote:
> > > >I have a table with 4,889,820 records in it.
nd how it
handles MVCC. In some cases, splitting a frequently updated row out to a
separate table might not gain as much once we get HOT, but it's still a
good tool to consider. Depending on what you're doing another useful
technique is to not update the field as often by logging updates to b
> xpect that setting fillfactor to 100 will encourage indexes to bloat
> faster, and would only be recommended if you didn't expect the index contents
> to change?
Yes.
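For illustration, fillfactor is an index storage parameter (8.2+); names here are hypothetical:

-- frequently-updated data: leave free space so index pages don't split immediately
CREATE INDEX orders_customer_idx ON orders (customer_id) WITH (fillfactor = 90);
-- static, read-only data: pack the pages completely
CREATE INDEX archive_order_idx ON orders_archive (order_id) WITH (fillfactor = 100);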
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
If you haven't messed with indexed fields, it
doesn't have to touch those either.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
to control your growth rate?
How well will the existing design scale out? (Often times what is a good
design for a smaller set of data is sub-optimal for a large set of
data.)
Something else that might be worth looking at is having your existing
workload modeled; that allows build
t you'll have some kind of backup server available, which
makes an extra controller much less useful.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
On Thu, Aug 09, 2007 at 08:58:19PM -0500, Scott Marlowe wrote:
> > On Thu, Aug 09, 2007 at 05:29:18PM -0500, Scott Marlowe wrote:
> > > On 8/9/07, Decibel! <[EMAIL PROTECTED]> wrote:
> > >
> > > > Also, a good RAID controller can spread reads out acro
to benchmark both approaches with your
actual application.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
> I'm pretty sure it actually says "RAID10" in the BIOS,
> but is this a lie?
Unless they use the "plus notation" (ie: RAID 1+0 or RAID 0+1), you
never truly know what you're getting.
BTW, there's other reasons that RAID 0+1 stinks, beyond just
performance.
--
ehouse, I'd put a real
low priority on testing RAID5... it's rarely a good idea for a database.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
oller, and it can cache writes (it has a
BBU), I'd actually lean towards putting all 12 drives into one raid
10. A good controller will be able to handle WAL fsyncs plenty fast
enough, so having a separate WAL mirror would likely hurt more than
help.
--
Decibel!, aka
for
many systems, a write-caching controller will be very similar in
performance to async commit.
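For comparison, this is what asynchronous commit (the 8.3 synchronous_commit setting) looks like in use; the table name is hypothetical and the setting can be scoped to a single transaction:

BEGIN;
SET LOCAL synchronous_commit = off;   -- this commit may be lost on a crash, but never corrupted
INSERT INTO audit_log (message) VALUES ('example');
COMMIT;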
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
aster than messing around
with pg_dump.
What kind of disk hardware is this running on? A good raid 10 array with
write caching should be able to handle a 200G database fairly well; at
least better than it is from what I'm hearing.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
On Mon, Aug 27, 2007 at 04:56:33PM -0500, Kevin Grittner wrote:
> >>> Decibel! <[EMAIL PROTECTED]> 08/27/07 4:00 PM >>>
> > > > They're running version 8.1.4
> >
> > As for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *
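Sketching out the quoted suggestion (names hypothetical); on 8.1 this rewrites the data compactly without a dump/restore cycle, though indexes, constraints and permissions have to be recreated on the copy:

CREATE TABLE accounts_copy AS SELECT * FROM accounts;
-- then recreate indexes/constraints, swap the table names, and drop the old table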
ling pagecache and the IO smoothing it was designed
> >>to perform?
> >
> >You are working at the correct level. The bgwriter performs the I/O
> >smoothing
> >function at the database level. Obviously, the OS level smoothing
> >function
> >needed
not working fine? Maybe database should need
> a restart? I really don't know.
No, it sounds to me like you just weren't vacuuming aggressively enough
to keep up with demand.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.co
y turn off nested loops, what side effects would we have to expect?
Are there more granular ways to tell the query planner when to use nested loops?
Or just other ideas what to do? We'd be grateful for any hint!
Here's what's killing you:
-> Nested Loop (c
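On the "more granular" part of the question, one hedged option (shown with a stand-in query) is to disable nested loops for a single transaction instead of globally:

BEGIN;
SET LOCAL enable_nestloop = off;       -- reverts automatically at COMMIT/ROLLBACK
EXPLAIN ANALYZE
SELECT * FROM t1 JOIN t2 USING (id);   -- stand-in for the real query
COMMIT;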
is small enough to fit in-memory, your IO subsystem
is almost certainly going to kill you. Even if it does fit in memory, if
you're doing much writing at all you're going to be in big trouble.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://e
from the ISP.
Get yourself the ability to benchmark your application. This is
invaluable^W a requirement for any kind of performance tuning.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
ssis == lots of
throughput.
Finally, if you do get a SAN, make sure and benchmark it. I've seen more
than one case of a SAN that wasn't getting anywhere near the performance
it should be, even with a simple dd test.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
u live somewhere without thunderstorms? This is a regular event in
Actually, he does. :) Or at least I don't think Portland gets a lot of
t-storms, just rain by the bucketful.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.94
t room; so if there is a strike it's far more likely
that I'll lose switches and not hardware.
> Then of course there are the *other* risks, such as the place burning to
> the ground, or getting drowned by a break in the city reservoir that's
> a couple hundred
width is the bottleneck.
Actually, in this case, I suspect that latency will be far more critical
than overall bandwidth. I don't know if it's inherent to Gig-E, but my
limited experience has been that Gig-E has higher latency than 100mb.
--
Decibel!, aka Jim Nasby
4G of total memory in which case
I subtract less.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
On Tue, Sep 11, 2007 at 02:49:28PM -0400, Jean-David Beyer wrote:
> It is tricky for me to find a big enough file to test. I tried one of the
dd if=/dev/zero of=bigfile bs=8192 count=100
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB h
re doing something wrong. :) Run lazy
vacuum frequently enough (better yet, autovacuum, but cut all of 8.1's
autovac parameters in half), and make sure your FSM is big enough
(periodic vacuumdb -av | tail is an easy way to check that).
Try a REINDEX. VACUUM FULL is especially hard on th
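A minimal sketch of both suggestions (table name hypothetical); on 8.1-8.3 the tail end of a database-wide VACUUM VERBOSE reports how full the free space map is relative to max_fsm_pages:

REINDEX TABLE bloated_table;   -- rebuilds every index on the table
VACUUM VERBOSE;                -- whole database; read the FSM summary in the last lines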
but from what I've seen, memory seems to
be fairly independent of CPU frequency now-a-days, so I don't think
there's a huge difference there.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
On Tue, Sep 11, 2007 at 05:09:00PM -0400, Michael Stone wrote:
> On Tue, Sep 11, 2007 at 03:55:51PM -0500, Decibel! wrote:
> >Also, to reply to someone else's email... there is one big reason to use
> >a SAN over direct storage: you can do HA that results in 0 data lo
t your FSM is too small.
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
On Wed, Sep 12, 2007 at 12:02:46AM +0100, Gregory Stark wrote:
> "Decibel!" <[EMAIL PROTECTED]> writes:
>
> > On Tue, Sep 11, 2007 at 02:49:28PM -0400, Jean-David Beyer wrote:
> >> It is tricky for me to find a big enough file to test. I tried one of the
>
's been fixed for 8.3.
Actually, that's a bit over-conservative... what happens prior to 8.3 is
that CLUSTER rewrites the table using its XID for everything. That can
break semantics for any transactions that are running in serializable
mode; if you're just using the d
meone
> through what they need to know about their tables first and then going
> into how vacuum works based on that data.
Take a look at the stuff at http://decibel.org/~decibel/pervasive/, it'd
hopefully provide a useful starting point.
--
Decibel!, aka Jim C. Nasby,
So how is it that linux thinks that 30G is cached?
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
ared buffers should be in use.
I'm leaning towards "top on linux == dumb".
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
ncluding shared memory as 'cached', then no, the
information it's providing is not correct.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
ibution up-
to-date... ;)
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
separate table (vertical partitioning). That will make the main table
much easier to vacuum, as well as reducing the impact of the high
churn rate.
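A hedged sketch of that kind of vertical partitioning (all names hypothetical): the wide, stable columns stay in the main table and the high-churn counter moves out:

CREATE TABLE item (
    item_id     serial PRIMARY KEY,
    description text                    -- wide, rarely updated
);
CREATE TABLE item_stats (
    item_id   int PRIMARY KEY REFERENCES item,
    hit_count int NOT NULL DEFAULT 0    -- updated constantly; cheap to vacuum on its own
);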
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
ns every 1-5 minutes.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
On Nov 19, 2007, at 9:23 AM, Tom Lane wrote:
Decibel! <[EMAIL PROTECTED]> writes:
FWIW, 20k rows isn't all that big, so I'm assuming that the
descriptions make the table very wide. Unless those descriptions are
what's being updated frequently, I suggest you put thos
even more expensive.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
be oughta. :) Though
I'd probably just call it ideal.
Having said all that, this looks highly useful!
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
ly set shared_buffers to 10% of memory...
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
insert.
This is a case where a cron'd vacuum that runs once a minute is
probably a wise idea.
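What such a once-a-minute job would actually execute (table name hypothetical); the scheduling itself would just be cron running psql -c or vacuumdb:

VACUUM ANALYZE high_churn_table;   -- plain (lazy) vacuum: cheap when run this often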
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
from psql), and then
SELECT sum(relpages)
FROM pg_class c
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE n.nspname = 'pg_temp_blah';
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
rounds in per second; with default cost settings that means you'd be
able to actually vacuum about 50 dirty pages per second, tops. Of course
not all pages will be dirty, but still...
I normally use between 10 and 20 for cost_delay (lower values for faster
drive arrays).
> autovacuum_vacuum_
pty array is
considered to be NULL; as such the comments above would apply.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
that's the actual expense in the exception
block) is fairly CPU-intensive, but it's not common for a database
server to be CPU-bound, even for OLTP. You're usually still waiting
on disk.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your
agree with others though: it certainly doesn't sound like there's
any reason to be using temp tables here at all. This sounds like a
case of trying to apply procedural programming techniques to a
database instead of using set theory (which generally doesn't work
well).
--
n possible.
If you do this *please* post it. I really think it would be worth
while for us to have fixed-size data types for common forms of binary
data; MD5, SHA1 and SHA256 come to mind.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some b
f access. A more realistic workload would probably have a Zipfian
distribution of account number touches, and might look a little better
on this type of test.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
y was a bit longer but this is the
part that made the difference) after a little change the query took
~1 second:
select * from t1, t2 where t1.id > 158507 and t2.id > 158507 and
t1.id = t2.id;
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer
from heapgetnext()
I forget the details of (2). As I recall, if we fall off of the
assumption, the penalty for long scans gets large-ish (maybe 2X), but
since when do people full-table scan when their updates/inserts are
so scattered across TIDs? It's an obvious big win
big disadvantage for data
warehouses; though perhaps the way to fix that is recommend a
backgrounded vacuum after data load.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
ter
then first one which surprising to me.
The second index scan wasn't done in the second query because you
don't have the second IN clause. And it's actually the 1st query that
was faster, because it returned fewer rows (15k instead of 45k).
--
Decibel!, aka Jim C. Nasby, Da
're using. If you're doing random writes that perfectly align with the
raid stripe size, you'll see virtually no RAID5 overhead, and you'll
get the performance of N-1 drives, as opposed to RAID10 giving you N/2.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROT
y a sequence and another field that's insert
timestamp; those two fields would correlate highly, and you should
even be able to correlate the two histograms; that would allow you to
infer that most of the insert times for _id's between 100 and 200
will be between 2008-01-01 00:10 and 20
at I think about it the last time this was brought up
there was discussion about something that doesn't force a particular
execution method, but instead provides improved information to the
planner. It might be worth pursuing that, as I think there was less
opposition to it.
--
Decibel!, a
ny good controller should take that
approach.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
ease-of-
management of a database, but keeps the data away from your
production data.
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
f you have a defined set of points that you need to report
on, create a materialized view that has that information.
BTW, it would probably be better to store data either in the main
table, or the history table, but not both places.
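In this era a "materialized view" means a summary table you maintain yourself; a hedged sketch with hypothetical names:

CREATE TABLE daily_summary AS
SELECT date_trunc('day', recorded_at) AS day,
       count(*)   AS samples,
       avg(value) AS avg_value
FROM history
GROUP BY 1;
-- refreshed periodically, e.g. TRUNCATE daily_summary and re-run the same SELECT as an INSERT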
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROT
uild arrays of composite types, that might be
an easy way to deal with this stuff. Create a composite type of
(name_id, value) and store that in an array.
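A sketch of that idea (names hypothetical; arrays of composite types need 8.3 or later):

CREATE TYPE name_value AS (name_id int, value text);
CREATE TABLE entity (
    entity_id  serial PRIMARY KEY,
    attributes name_value[]
);
INSERT INTO entity (attributes)
VALUES (ARRAY[ROW(1, 'red')::name_value, ROW(2, 'large')::name_value]);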
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
can never be used (we're actually working on one right now, but it
will only be used to occasionally pull up single entities), but you
have to be really careful with it. I don't see it working very well
for what it sounds like you're trying to do.
--
Decibel!, aka Jim C. Nas
to allow for changing the definition of the composite type?
--
Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
st (partno). AFAIK it's considerably slower
to scan.
Actually, that's not necessarily true. If both partno and procuder_id
are ints and you're on a 64bit platform, there won't be any change in
index size, due to alignment issues.
--
Decibel!, aka Jim C. Nasby, Database Archi
tions do I have.
FWIW, you can avoid that with SET LOCAL (though it would still affect
the rest of the transaction).
You could also store whatever enable_seqscan was set to in a variable
before setting it to false and then set it back when you're done.
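Both variants, sketched with a stand-in query:

-- 1) transaction-scoped: reverts by itself
BEGIN;
SET LOCAL enable_seqscan = off;
SELECT * FROM parts WHERE partno = 42;
COMMIT;

-- 2) save and restore by hand
SHOW enable_seqscan;        -- note the current value
SET enable_seqscan = off;
SELECT * FROM parts WHERE partno = 42;
SET enable_seqscan = on;    -- put back whatever SHOW reported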
--
Decibel!, aka Jim C. Nasby, Data
use EXECUTE, but building the query plan
is a serious pain in the rear.
--
Decibel!, aka Jim C. Nasby, Database Architect deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828
't think FBSD would allow that, but if it does it wouldn't surprise me
if kernel/OS performance stunk. If Postgres then used the same
settings it would make matters even worse (IIRC there is some code
that's different in an AMD vs Intel build).
--
Decibel!, aka Jim C. Nasby, Data
l help if you report number of cpus when you respond back to
help collect feedback.
Do you have a self-contained test case? I have several boxes with 16-
cores worth of Xeon with 96GB I could try it on (though you might not
care about having "only" 16 cores :P)
--
Decibel!, ak
, isn't there a DTrace probe that would provide that info? It
certainly seems like something you'd want to know...
--
Decibel!, aka Jim C. Nasby, Database Architect deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828
peat test, try
compiling a complex system (such as FreeBSD) with different levels of
-j handed to make (of course you'll need to wait until everything is
in cache, and I'm assuming you have enough memory so that everything
would fit in cache).
--
Decibel!, aka Jim C. Nasby, Datab
good news is there's now at least enough runnable procs. What I
find *extremely* odd is the CPU usage is almost dead constant...
--
Decibel!, aka Jim C. Nasby, Database Architect deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828
te temp files that could fit in
work_mem...
Are you using temp tables? Those end up in pgsql_tmp as well.
--
Decibel!, aka Jim C. Nasby, Database Architect deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828
ies (=, <>, <, >) as well.
If you want an example of that, we had Command Prompt create a full
set of hash datatypes (SHA*, and I think md5). That stuff should be
on pgFoundry; if it's not drop me a note at jna...@cashnetusa.com and
I'll get it added.
--
Decibel!,
very large number of rows
(probably at least 10M), the overhead of a text datatype over a
smallint or int/oid gets to be very large.
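A hedged sketch of the usual remedy (names hypothetical): keep the text in a small lookup table and store only a smallint key in the big one:

CREATE TABLE status_lookup (
    status_id smallint PRIMARY KEY,
    status    text NOT NULL UNIQUE
);
CREATE TABLE events (
    event_id  bigserial PRIMARY KEY,
    status_id smallint NOT NULL REFERENCES status_lookup
);
-- 2 bytes per row in 'events' instead of a repeated text value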
--
Decibel!, aka Jim C. Nasby, Database Architect deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828
..., NULL::"unknown" AS
task_amount
FROM loan_tasks_pending ltp
JOIN loan_task_codes ltc ON ltp.loan_task_code_id = ltc.id;
The stuff I omitted is just some fields and a few other NULLs. This
is 8.2.9.
--
Decibel! [EMAIL PROTECTED] (512) 569-9461
On Nov 10, 2008, at 7:06 AM, Tom Lane wrote:
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
loan_tasks effectively does SELECT * FROM loan_tasks_committed UNION
ALL SELECT * FROM loan_tasks_pending;.
You seem to have neglected to mention a join or two.
Yeah,
On Nov 10, 2008, at 12:21 PM, Richard Huxton wrote:
Jim 'Decibel!' Nasby wrote:
On Nov 10, 2008, at 7:06 AM, Tom Lane wrote:
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
loan_tasks effectively does SELECT * FROM loan_tasks_committed
UNION
ALL SELE
On Nov 10, 2008, at 1:31 PM, Tom Lane wrote:
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
Here's the commands to generate the test case:
create table a(a int, b text default 'test text');
create table c(c_id serial primary key, c_text text);
in
On Nov 10, 2008, at 9:20 PM, Tom Lane wrote:
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
On Nov 10, 2008, at 1:31 PM, Tom Lane wrote:
On my machine this runs about twice as fast as the original view.
Am I missing some magic? I'm still getting the subq
On Nov 11, 2008, at 1:15 PM, Tom Lane wrote:
"Jim 'Decibel!' Nasby" <[EMAIL PROTECTED]> writes:
On Nov 10, 2008, at 9:20 PM, Tom Lane wrote:
8.3 gets it right though.
Doesn't seem to for me... :/
Oh, I was looking at "select * from v2" not "s