On 24 June 2013 10:48, sathish_ix skandhasw...@inautix.co.in wrote:
Hi,
Can I write the query this way? I need to append *** to a returned field; is this possible?
account_number holds the last 4 characters.
account_name holds the full name of the account.
I need to append *** in front of the account number.
Hi Aaron,
Are you talking about securing the Lucene index?
If so, you can try using https://code.google.com/p/lucenetransform/.
Thanks and Regards
Vignesh Srinivasan
9739135640
On Mon, Jun 24, 2013 at 11:21 AM, Aaron Greenspan
aar...@thinkcomputer.com wrote:
Hi,
Some more unsolicited feedback
I was reading this post:
http://myjeeva.com/upgrade-migrate-solr-3x-to-solr-4.html
However, under step 6 and 7 I'm unsure what to do:
Step 6: So remove old configuration section and add new one indexConfig
Do I need to remove the entire <indexDefaults> and <mainIndex> nodes? What
should I do with
You might want to read up on Jetty webserver security if that is what you
are using for the web container.
K
On Mon, Jun 24, 2013 at 12:16 AM, Utkarsh Sengar utkarsh2...@gmail.com wrote:
Thanks!
1. shards.tolerant=true works; shouldn't this parameter be the default?
A whole shard being unavailable is a big deal. The default behavior
should not hide such a condition. Some people may be willing to take a
To change Solr's default port number just pass -Djetty.port= on the
command line, works a treat.
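For later searchers, the flag goes on the startup command line; a minimal sketch, assuming the example Jetty distribution (the port number is only illustrative):

```shell
java -Djetty.port=8984 -jar start.jar
```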
As Solr is deployed as a web app, it is assumed that the administrator
is familiar with web apps, servlet containers, and their security; if
not, then that is something you need to
Or use update processors, such as the script update processor. Have one
field that is for search purposes, and another that is for
storage/display.
If the point is to obscure the account number, shipping it in the HTML
and having CSS do it is too late, isn't it?
Upayavira
On Mon, Jun 24, 2013,
On Mon, Jun 24, 2013 at 11:55 AM, PeterKerk vettepa...@hotmail.com wrote:
I was reading this post:
http://myjeeva.com/upgrade-migrate-solr-3x-to-solr-4.html
However, under step 6 and 7 I'm unsure what to do:
Step 6: So remove old configuration section and add new one indexConfig
Do I need
On 24 June 2013 13:10, Upayavira u...@odoko.co.uk wrote:
Or use update processors, such as the script update processor. Have one
field that is for search purposes, and another that is for
storage/display.
If the point is to obscure the account number, shipping it in the HTML
and having CSS
Hi
I have a synonyms file that looks like this:
finagle => æggeblomme
frumpy => spiste
canard, æggeblomme
corpse, spiste
(It's just an example, and has no real meaning).
The issue I don't understand is that a search for finagle does not find
documents containing æggeblomme (which means egg
You can use Apache Nutch to crawl local file systems as well, and index to
Solr as one would otherwise do.
Cheers
-Original message-
From:Sourabh107 sourabh.jain@gmail.com
Sent: Sunday 23rd June 2013 17:12
To: solr-user@lucene.apache.org
Subject: Solr File System Search
I
hello,
I'm a long-time user of Lucene, and have some questions about SOLR.
1. Is it possible to give actual Lucene queries to SOLR, bypassing any
SOLR-side QueryParsing ?
2. Are there differences in functionality or implementation between Faceted
Search in Lucene and SOLR ?
3. Is it possible
Now, each doc looks like this (I generated random user text in the freetext
columns in the DB):
<doc>
  <str name="PackageName">We have located the ship.</str>
  <arr name="CatalogVendorPartNum">
    <str>d1771fc0-d3c2-472d-aa33-4bf5d1b79992</str>
    <str>b2986a4f-9687-404c-8d45-57b073d900f7</str>
Hi Solr users,
I'm using solr 4.2.1 and I have some questions about soft commit and
spellcheckers.
I see dictionary rebuild going on after softcommit, for my application
it is acceptable to rebuild the index once a day, so I tried
to switch buildOnCommit parameter to false in
On 06/23/2013 05:53 AM, Shalin Shekhar Mangar wrote:
Use shards.tolerant=true to return documents that are available in the
shards that are still alive.
Beware that currently shards.tolerant=true prevents grouping and facets :
https://issues.apache.org/jira/browse/SOLR-3369
--
André
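The shards.tolerant parameter simply rides along with the request like any other; a small Python sketch of building such a URL (host, port, and core name are invented for illustration):

```python
from urllib.parse import urlencode

# Ask Solr to return whatever the live shards have instead of failing
# the whole request when one shard is down.
params = {
    "q": "*:*",
    "shards.tolerant": "true",
    "wt": "json",
}
url = "http://localhost:8983/solr/collection1/select?" + urlencode(params)
print(url)
```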
If you wait for Solr 4.4, then there will be a new behavior for solr.xml. It
will no longer list the cores, but do auto-discovery, so no need to modify it
when you add/remove cores. All you need to do is drop a new core folder in
there and Solr will pick it up. See
Thanks for bringing that up Andre. I'll take a look at the patches.
2013/6/24 Andre Bois-Crettez andre.b...@kelkoo.com:
On 06/23/2013 05:53 AM, Shalin Shekhar Mangar wrote:
Use shards.tolerant=true to return documents that are available in the
shards that are still alive.
Beware that
On 24/06/13 13:26, Mysurf Mail wrote:
Now, each doc looks like this (I generated random user text in the freetext
columns in the DB):
<doc>
  <str name="PackageName">We have located the ship.</str>
  <arr name="CatalogVendorPartNum">
    <str>d1771fc0-d3c2-472d-aa33-4bf5d1b79992</str>
Just had an odd scenario in our current Solr system (4.3.0 + SOLR-4829
patch), 4 shards, 2 replicas (leader + 1 other) per shard spread across 8
machines.
We sent all our updates into a single instance, and we shutdown a leader
for maintenance, expecting it to failover to the other replica. What
Inline below...
On Jun 24, 2013, at 07:16 , heikki wrote:
hello,
I'm a long-time user of Lucene, and have some questions about SOLR.
1. Is it possible to give actual Lucene queries to SOLR, bypassing any
SOLR-side QueryParsing ?
No, not directly as Query objects, but Solr's default
Try the Solr Admin UI analysis page and see how finagle and æggeblomme are
analyzed at BOTH index and query time.
The => rule does a replacement, while the pure comma rules support
equivalence.
Your query-time and index-time analyzers need to be compatible, which
sometimes means that they
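To make the distinction concrete, here is a hypothetical synonyms.txt fragment in the two styles, using the words from the question:

```
# explicit mapping: "finagle" is rewritten to "æggeblomme" only
finagle => æggeblomme
# equivalence: "canard" and "æggeblomme" each match documents containing either
canard, æggeblomme
```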
As a bit of background, we run a setup (coming from 3.6.1 to 4.2 relatively
recently) with a single master receiving updates with three slaves pulling
changes in. Our index is around 5 million documents, around 26GB in size
total.
The situation I'm seeing is this: occasionally we update the
A bunch of replication related issues were fixed in 4.2.1 so you're
better off upgrading to 4.2.1 or later (4.3.1 is the latest release).
On Mon, Jun 24, 2013 at 6:55 PM, Neal Ensor nen...@gmail.com wrote:
As a bit of background, we run a setup (coming from 3.6.1 to 4.2 relatively
recently)
I don't get any results with "has" (inflections). Why?
Wildcard patterns on strings are literal, exact. There is no automatic
natural language processing.
You could try a regular expression match:
q=/ ha(s|ve) /
Or, just use OR:
q=*has* OR *have*
Or, use a copyField of the package name to a
It will take a short bit of time before a new leader takes over when a leader
goes down - that's expected - how long it takes will vary. Some things will do
short little retries to kind of deal with this, but you are alerted that those
updates failed, so you have to deal with that as you would other
Thanks Mark. Yes, I expected some finite time for the leader to take over,
just hadn't realized/comprehended that Jetty was already shutdown by this
point... Yes, I suppose the container has to stop sending requests to the
context before it can shut the context down, so that's the window where
Thanks for your answer, Jack Krupansky.
Here is my request handler:
<requestHandler name="/dataimport"
    class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
  </lst>
</requestHandler>
It looks similar to yours.
Here is my
Thanks for your answer Gora.
My whole idea is to create a webapp in which users can configure their
locations and search whatever they want in those locations. Will I be able to
achieve that using Apache Solr?
Can you please suggest any existing platform on which I can create my webapp? I
can write a Java
Currently I am using text_general.
I want to support free-text search from users, therefore I would like
tokenization, stemming, ...
How do I define stemmers?
Should I use text_en instead of text_general?
Thank you.
I won’t continue to bore annoy anybody on this list with tedious comments about
my new Solr book on Lulu.com... please bookmark my blog,
http://basetechnology.blogspot.com/, for further updates on the book.
The book itself is here:
Thanks Jack and Giovanni.
Jack:
Regarding 1.b. have vs *have* the results were identical apart from the
score.
Basically I can't do all the stuff you recommended. I want a stemmer for an
unknown search (the query is sent when the user enters free text into a textbox).
giovanni- regarding requestHandler
Regarding
There is no *: feature to query all fields in Solr
When I enter the dashboard - solr/#/[core]/query
the default is *:*
and it brings everything.
On Mon, Jun 24, 2013 at 5:41 PM, Mysurf Mail stammail...@gmail.com wrote:
Thanks Jack and Giovanni.
Jack:
Regarding 1.b. have vs *have*
http://lmgtfy.com/?q=jetty+access+control
wunder
On Jun 23, 2013, at 10:51 PM, Aaron Greenspan wrote:
Hi,
Some more unsolicited feedback since my last experience setting up Solr…
I am concerned that having a duplicate copy of a large part of my database up
on the internet at a
Yes, *:* is a special feature that directly translates into a special
Lucene query feature - MatchAllDocsQuery, but the general *:abc syntax is
not supported.
I'm sure that at some point general *: support will be added to Solr, but
it is not there today - even though LucidWorks Search does
On Jun 24, 2013, at 12:51 AM, Aaron Greenspan aar...@thinkcomputer.com wrote:
all of them are terrible,
it looks like you can edit some XML files (if you can find them)
The wiki itself is full of semi-useless information, which is pretty
infuriating since it's supposed to be the best
Currently, I can't declare my unique key with indexed=false.
As I understand from the docs, the field attribute indexed should be true
only if I want the field to be searchable or sortable.
Let's say I have a schema with id and name only; wouldn't I want the
following configuration:
id - indexed
We have a solr cloud installation and when I execute a query that returns
an empty resultset because of the filter queries applied, a
NullPointerException is thrown.
This is the error msg:
Solr HTTP error: OK (500)
{"error":{"trace":"java.lang.NullPointerException\n\tat
To enforce uniqueness, Solr needs to be able to search on the id to see if it
is currently in the index.
-Michael
-Original Message-
From: Mysurf Mail [mailto:stammail...@gmail.com]
Sent: Monday, June 24, 2013 11:52 AM
To: solr-user@lucene.apache.org
Subject: why does the uniqueKey has
And here is our most recent experience with G1, although not with
Solr, but with HBase:
http://blog.sematext.com/2013/06/24/g1-cms-java-garbage-collector/
Otis
--
Solr ElasticSearch Support -- http://sematext.com/
Performance Monitoring -- http://sematext.com/spm
On Fri, Jun 21, 2013 at
It's a little frustrating to see the smug responses to your query,
and it's fair to say the Solr security situation could be *improved*.
This JIRA ticket is worth reading:
https://issues.apache.org/jira/browse/SOLR-4470
In short:
- it is possible to restrict access to Solr nodes using connection
Aaron, if public access is needed, most people just need to query Solr, not
update it. We tend to do this with reverse proxies. With a proxy you can
whitelist the request handlers and query params that are visible to the
outside world. You can use invariants to restrict many things even
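As a sketch of the invariants idea (the handler name and field list here are hypothetical), a request handler can pin parameters so outside callers cannot override them:

```xml
<!-- solrconfig.xml: a public search handler with pinned parameters -->
<requestHandler name="/public" class="solr.SearchHandler">
  <lst name="invariants">
    <!-- callers cannot change these, whatever they send -->
    <str name="fl">id,name</str>
    <str name="rows">10</str>
  </lst>
</requestHandler>
```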
We are looking to setup SolrCloud in multiple locations. For now, we will
assume that the data in one center should match the data in another
datacenter.
Is this the correct type of setup?
Setup a separate SolrCloud cluster and ZooKeeper quorum in each data
center? Configure cores and
Interesting. It seems to spend more time in GC, but the major GCs aren't any
faster. They are more consistent.
I notice that SPM shows average collection time. This is not a particularly
useful number. It should use median and percentiles.
For a one-sided distribution, never use mean
Hi Kevin,
From what I have gleaned, inter-datacenter Solr replication is not directly
supported. Solr Cloud relays each write request to each active node in a
shard before returning a response, so the round trip time between your
datacenters will be part of the response time for writes.
Using
The general idea is that tokenization can generally be done in a
language-independent manner, but stemming, synonyms, stop words, etc. must
be done in a language-dependent manner.
So, yes, text_en is a better starting point for adding in the more advanced
language processing features.
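For reference, a minimal English field type along these lines; a sketch modeled on the stock text_en type in the example schema (filter order is typical, trim to your needs):

```xml
<fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords_en.txt" ignoreCase="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PorterStemFilterFactory"/>
  </analyzer>
</fieldType>
```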
--
Can anybody please let me know how to index tt_content table or body text of
page in typo3.Can you please provide typoscript for indexing it.I am using
solr 4.2.0 and extension 2.8.0 and tomcat 6.0 and typo3 4.7.11.Its very
urgent.
Many thanks in advance.
--
View this message in context:
Hi,
It's with regards to generating a query string for Solr.
I am looking for a solution where I can create the query string like
'q=name:ipod AND cat=music AND features=cool&facet=on&facet.field=cat'
I understand, we may not be able to use lucene query API directly . Is
there any other library
Yup, known stuff, on TODO.
Otis
--
Solr ElasticSearch Support -- http://sematext.com/
Performance Monitoring -- http://sematext.com/spm
On Mon, Jun 24, 2013 at 12:50 PM, Walter Underwood
wun...@wunderwood.org wrote:
Interesting. It seems to spend more time in GC, but the major GCs aren't any
I'm currently running solr 4.0 final with manifoldcf 1.3 dev on tomcat 7.
I need to capture the h1 tags on each web page, as that is the true title,
for lack of a better word.
I can't seem to get it to work at all.
I read the instructions and used the capture component and then mapped it to
a
I'm seeing this message in the logs and it seems weird to me that the
instance needs to wait to see more replicas.
2013-06-24 18:12:40,408 [coreLoadExecutor-4-thread-1] INFO
solr.cloud.ShardLeaderElectionContext - Waiting until we see more
replicas up: total=2 found=1 timeoutin=139368
Can
Ok, I figured it out:
you need to add this too:
<str name="captureAttr">true</str>
--
View this message in context:
http://lucene.472066.n3.nabble.com/how-do-I-capture-h1-tags-tp4072792p4072798.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 24 June 2013 23:14, Ashwin Tandel ashwintan...@gmail.com wrote:
Hi,
It's with regards to generating a query string for Solr.
I am looking for a solution where I can create the query string like
'q=name:ipod AND cat=music AND features=cool&facet=on&facet.field=cat'
I understand, we may not
On 24 June 2013 21:21, Mysurf Mail stammail...@gmail.com wrote:
Currently, I can't declare my unique key with indexed=false.
As I understand from the docs, the field attribute indexed should be true
only if I want the field to be searchable or sortable.
Let's say I have a schema with id and
Thanks Gora.
On Mon, Jun 24, 2013 at 1:28 PM, Gora Mohanty g...@mimirtech.com wrote:
On 24 June 2013 23:14, Ashwin Tandel ashwintan...@gmail.com wrote:
Hi,
It's with regards to generating a query string for Solr.
I am looking for a solution where I can create the query string like
On 24 June 2013 20:07, Jack Krupansky j...@basetechnology.com wrote:
I won’t continue to bore annoy anybody on this list with tedious comments
about my new Solr book on Lulu.com... please bookmark my blog,
http://basetechnology.blogspot.com/, for further updates on the book.
[...]
Speaking
Hi,
It's with regards to generating a query string for Solr.
I am looking for a solution where I can create the query string like
'q=name:ipod AND cat=music AND features=cool&facet=on&facet.field=cat'
Is there any library through which we can create this query string.
I came across Solrj but
+1
On Mon, Jun 24, 2013 at 2:35 PM, Gora Mohanty g...@mimirtech.com wrote:
On 24 June 2013 20:07, Jack Krupansky j...@basetechnology.com wrote:
I won’t continue to bore annoy anybody on this list with tedious
comments about my new Solr book on Lulu.com... please bookmark my blog,
On 25 June 2013 00:07, Ashwin Tandel ashwintan...@gmail.com wrote:
Hi,
It's with regards to generating a query string for Solr.
I am looking for a solution where I can create the query string like
'q=name:ipod AND cat=music AND features=cool&facet=on&facet.field=cat'
Is there any library
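For what it's worth, outside of any Solr client library such a parameter string can be assembled with a generic URL-encoding helper; a small Python sketch (field and facet names taken from the question, written with ':' for Solr's field-query syntax):

```python
from urllib.parse import urlencode

# Build the Solr query string from a dict of parameters; urlencode
# handles the '&' separators and percent-encoding.
params = {
    "q": "name:ipod AND cat:music AND features:cool",
    "facet": "on",
    "facet.field": "cat",
}
query_string = urlencode(params)
print(query_string)
```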
Hello All,
I have the following set up of solr cloud.
* solr version 4.3.1
* 3 node solr cloud + replication factor 2
* 3 zoo keepers
* load balancer in front of the 3 solr nodes
I am seeing this strange behavior when I am indexing a large number of
documents (10 mil). When I have more than 3-5
Another way of overriding nutch fields is to modify solrindex-mapping.xml file.
hth
Alex.
-Original Message-
From: Jack Krupansky j...@basetechnology.com
To: solr-user solr-user@lucene.apache.org
Sent: Sun, Jun 23, 2013 12:04 pm
Subject: Re: document id in nutch/solr
Add the
Hi,
Does Solr provide any Java API/client library to generate an (e)dismax query?
E.g., using Java code/API I wish to generate a query like
'q=video&defType=edismax&qf=features^20.0+text^0.3&bq=cat:electronics^5.0'
Thanks in Advance,
Ashwin
From the SolrJWiki:
http://wiki.apache.org/solr/Solrj#Advanced_usage
SolrServer server = getSolrServer();
SolrQuery solrQuery = new SolrQuery().
    setQuery("ipod").
    setFacet(true).
    setFacetMinCount(1).
    setFacetLimit(8).
This is a safety mechanism - you can turn it off by configuring leaderVoteWait
to 0 in solr.xml.
This is meant to protect the case where you stop a shard or it fails and then
the first node to get started back up has stale data - you don't want it to
just become the leader. So we wait to see
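A sketch of where the setting lives (legacy 4.x solr.xml format; the value is in milliseconds and only illustrative, 0 disables the wait):

```xml
<!-- solr.xml: wait up to 3 minutes to see known replicas
     before this node takes over as leader -->
<cores adminPath="/admin/cores" leaderVoteWait="180000">
  ...
</cores>
```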
SolrJ:
SolrQuery query = new SolrQuery();
query.setQuery("video");
query.setParam("defType", "edismax");
query.setParam("qf", "features^20.0 text^0.3");
query.setParam("bq", "cat:electronics^5.0");
QueryResponse queryResponse = getSolrServer().query(query);
-- Jack Krupansky
-Original Message-
Hello everyone,
Apart from text search, can we use Solr as a data store to serve data for
analytics with drill-down charts, or charts to add as widgets on dashboards?
Any suggestion, examples?
Thanks,
Pradeep
Hi,
I have the same issue too, and my deployment is almost exactly the same,
http://lucene.472066.n3.nabble.com/updating-docs-in-solr-cloud-hangs-td4067388.html#a4067862
With some concurrency and batches of 10, Solr apparently has some deadlock
distributing updates.
Can you dump the
Ok, thanks Mark - makes sense to have this, so I'll just lower it a bit.
Cheers,
Tim
On Mon, Jun 24, 2013 at 2:31 PM, Mark Miller markrmil...@gmail.com wrote:
This is a safety mechanism - you can turn it off by configuring
leaderVoteWait to 0 in solr.xml.
This is meant to protect the case
On 6/24/2013 1:27 PM, Ashwin Tandel wrote:
Does Solr provide any Java API/client library to generate an (e)dismax query?
E.g., using Java code/API I wish to generate a query like
'q=video&defType=edismax&qf=features^20.0+text^0.3&bq=cat:electronics^5.0'
You can use the SolrJ library.
I expect it won't be fast enough for general use. Most analytics stores
implement functions inside the server to aggregate large amounts of data. There
is always some query that returns the whole database in order to calculate an
average.
I'm sure it will work fine for some things and for
If you use the edismax query parser, the uf parameter can be set to
restrict the fields that the user can directly reference.
-- Jack Krupansky
-Original Message-
From: Mysurf Mail
Sent: Monday, June 24, 2013 11:51 AM
To: solr-user@lucene.apache.org
Subject: why does the uniqueKey has to be
Here is the ulimit -a output:
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 179963
max locked memory       (kbytes, -l) 64
max
Vinay,
What autoCommit settings do you have for your indexing process?
Jason
On Jun 24, 2013, at 1:28 PM, Vinay Pothnis poth...@gmail.com wrote:
Here is the ulimit -a output:
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority
I have 'softAutoCommit' at 1 second and 'hardAutoCommit' at 30 seconds.
On Mon, Jun 24, 2013 at 1:54 PM, Jason Hellman
jhell...@innoventsolutions.com wrote:
Vinay,
What autoCommit settings do you have for your indexing process?
Jason
On Jun 24, 2013, at 1:28 PM, Vinay Pothnis
You may want to look at something like http://www.crawl-anywhere.com/
. I don't think they have a file-based crawler, but they would likely want
one anyway. You may find it faster to contribute to their project than to
figure out your own.
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
Hello,
I am a newbie to Solr.
I am trying out partial search (matching). My experience is the opposite of
http://lucene.472066.n3.nabble.com/string-field-does-not-yield-exact-match-result-using-qf-parameter-td4060096.html
When I add 'qf' to to dismax query I get no result unless there's a full
match.
I
Yeah, perhaps, yet people keep using it for this. So, Pradeep, it
may work for you, and if you share some numbers with us we may be able
to tell you "no way" or "very likely OK". :)
Otis
--
Solr ElasticSearch Support -- http://sematext.com/
Performance Monitoring -- http://sematext.com/spm
On
Hi,
I am adding around 100 million records to SOLR using SOLRJ. I am not
performing commit operation until I add all the documents to SOLR. I see
that my program adds the docs very fast (1 million per minute) for around 18
million documents, which is the expected result, but after 18 million records
Hi Shawn,
Thanks for your reply. I removed hardcoded dataDir from solr.xml and created
different solr.xml per Solr instance. After these changes, the SPLITSHARD
command returned successfully.
Before splitting, shard1 was pointing to HOST1 and HOST2 (leader and replica)
and shard2 was
Trey Grainger's presentation may be relevant here:
http://www.lucenerevolution.org/2013/Building-a-Real-time-Big-Data-Analytics-Platform-with-Solr
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of
It looks like partial search works only against the copyField target. This works:
$ curl
"http://localhost:8282/solr/links/select?q=text_ngrams:yengas&wt=json&indent=on&fl=id,domain,score"
On Tue, June 25, 2013 12:39 am, Mugoma Joseph O. wrote:
Hello,
I am newbie to solr.
I am trying out partial
Vinay,
You may wish to pay attention to how many transaction logs are being created
along the way to your hard autoCommit, which should truncate the open handles
for those files. I might suggest setting a maxDocs value in parallel with your
maxTime value (you can use both) to ensure the
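The suggestion above might look like this in solrconfig.xml (times in milliseconds; the 1 s / 30 s values mirror this thread, the maxDocs value is only illustrative):

```xml
<autoCommit>
  <maxTime>30000</maxTime>   <!-- hard commit at most every 30 s... -->
  <maxDocs>10000</maxDocs>   <!-- ...or every 10,000 docs, whichever comes first -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>1000</maxTime>    <!-- soft commit every second for visibility -->
</autoSoftCommit>
```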
Jason,
Regarding your statement "push you over the edge" - what does that mean?
Does it mean uncharted territory with unknown ramifications or something
more like specific, known symptoms?
I ask because our use is similar to Vinay's in some respects, and we want
to be able to push the capabilities
The queue size is high, but I doubt that's your issue. Here's what
I'd do:
1. Check the Solr server. Is it CPU bound? I/O bound? You have to
identify where the resources are being spent before you get to
implementing a solution.
Are you using SolrCloud? If so, not committing until all 100M docs is
Scott,
My comment was meant to be a bit tongue-in-cheek, but my intent in the
statement was to represent hard failure along the lines Vinay is seeing. We're
talking about OutOfMemoryException conditions, total cluster paralysis
requiring restart, or other similar and disastrous conditions.
The UI will still show shard1 after the split for two reasons:
1. The UI is not aware of shard states yet so even though shard1 is
inactive and won't process any requests, it shows up as green in the
Admin UI
2. A shard is *not* deleted automatically after a split so if you want
it to be thrown
Yes. A lot of people are using it in front of Solr to speed things up and
reduce garbage collection.
On Fri, Jun 21, 2013 at 9:44 AM, Jack Park jackp...@topicquests.org wrote:
I presume you mean https://www.varnish-cache.org/
That's the first I'd heard of it.
Thanks
Jack
On Thu, Jun 20,
I agree. It is even slower when the slave is being pounded.
On Fri, Jun 21, 2013 at 3:35 AM, Ted zhanghailian...@qq.com wrote:
Solr replication is extremely slow (less than 1MB/s).
When the replication is running, network and disk occupancy rate remained at
a very low level.
I've tried
It does restart the MMap stuff though.
On Fri, Jun 21, 2013 at 12:26 PM, Michael Ryan mr...@moreover.com wrote:
Restarting Solr won't clear the disk cache. When I'm doing perf testing,
I'll sometimes run this on the server before each test to clear out the
disk cache:
echo 1
Shalin,
There's one point to test without caches, which is to establish how much value
a cache actually provides.
For me, this primarily means providing a benchmark by which to decide when to
stop obsessing over caches.
But yes, for load testing I definitely agree :)
Jason
On Jun 21,
Yeah, I was talking about load testing. Tuning caches by looking at
evictions and hit ratio is what's useful.
On Tue, Jun 25, 2013 at 11:08 AM, Jason Hellman
jhell...@innoventsolutions.com wrote:
Shalin,
There's one point to test without caches, which is to establish how much
value a cache
Thanks.
On Mon, Jun 24, 2013 at 5:52 PM, Jack Krupansky j...@basetechnology.com wrote:
The general idea is that tokenization can generally be done in a
language-independent manner, but stemming, synonyms, stop words, etc. must
be done in a language-dependent manner.
So, yes, text_en is a
Some tokenizers are language-specific -- Japanese, Chinese, Thai, probably
Arabic, and to some degree, Portuguese.
For English, stemming does not help that much. On general text, it might be a
5% improvement. For proper nouns (movie and book titles, product names), it
probably doesn't help and