GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17699
[SPARK-20405][SQL] Dataset.withNewExecutionId should be private
## What changes were proposed in this pull request?
Dataset.withNewExecutionId is only used in Dataset itself and should be private.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17698#discussion_r112383091
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -1036,3 +1036,8 @@ case class UpCast(child: Expression
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112382152
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/ArrowConvertersSuite.scala ---
@@ -0,0 +1,568 @@
+/*
+ * Licensed to the Apache Software
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112381608
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/ArrowConvertersSuite.scala ---
@@ -0,0 +1,568 @@
+/*
+ * Licensed to the Apache Software
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112376143
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ArrowConverters.scala ---
@@ -0,0 +1,432 @@
+/*
+* Licensed to the Apache Software Foundation
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112376037
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ArrowConverters.scala ---
@@ -0,0 +1,432 @@
+/*
+* Licensed to the Apache Software Foundation
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112375921
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ArrowConverters.scala ---
@@ -0,0 +1,432 @@
+/*
+* Licensed to the Apache Software Foundation
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112375496
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ArrowConverters.scala ---
@@ -0,0 +1,432 @@
+/*
+* Licensed to the Apache Software Foundation
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112373858
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ArrowConverters.scala ---
@@ -0,0 +1,432 @@
+/*
+* Licensed to the Apache Software Foundation
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112373805
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ArrowConverters.scala ---
@@ -0,0 +1,432 @@
+/*
+* Licensed to the Apache Software Foundation
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112370906
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ArrowConverters.scala ---
@@ -0,0 +1,432 @@
+/*
+* Licensed to the Apache Software Foundation
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112370321
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ArrowConverters.scala ---
@@ -0,0 +1,432 @@
+/*
+* Licensed to the Apache Software Foundation
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112368956
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ArrowConverters.scala ---
@@ -0,0 +1,432 @@
+/*
+* Licensed to the Apache Software Foundation
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112368367
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ArrowConverters.scala ---
@@ -0,0 +1,432 @@
+/*
+* Licensed to the Apache Software Foundation
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112365872
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1635,21 +1636,49 @@ def toDF(self, *cols):
return DataFrame(jdf, self.sql_ctx
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112365773
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1635,21 +1636,49 @@ def toDF(self, *cols):
return DataFrame(jdf, self.sql_ctx
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15821#discussion_r112365501
--- Diff: python/pyspark/serializers.py ---
@@ -182,6 +182,23 @@ def loads(self, obj):
raise NotImplementedError
+class
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15821
@BryanCutler Are you going to update this for arrow 0.3?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15821
Please move ArrowConverters.scala somewhere else that's not top level, e.g.
execution.arrow
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17678
Jenkins, test this please.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17678
Is there a codegen version we need to worry about?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17690
Thanks - merging in master/branch-2.2.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17692
Merging in master/branch-2.2.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17648
Can we just do a logical rewrite to turn them into "condA + condB + condC >
0" (for Some/Any) ?
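The logical rewrite suggested in the comment above can be checked with a small standalone sketch (plain Python, illustrative names only; this is not Spark code): a Some/Any predicate over boolean conditions is equivalent to asking whether the sum of their 0/1 values exceeds zero.

```python
# Standalone sketch of the "condA + condB + condC > 0" rewrite for
# Some/Any predicates. The function name and helpers are illustrative.

def any_via_sum(conds):
    """'At least one condition holds' expressed as a sum of 0/1 terms > 0."""
    return sum(1 if c else 0 for c in conds) > 0

# The arithmetic form agrees with the ordinary boolean OR:
for cond_a in (True, False):
    for cond_b in (True, False):
        for cond_c in (True, False):
            assert any_via_sum([cond_a, cond_b, cond_c]) == (cond_a or cond_b or cond_c)
```

The same trick extends to All ("every condition holds" becomes "the sum equals the number of conditions"), which is presumably why it is attractive as an optimizer rewrite.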
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17657
Merging in master
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17661
Merging in branch-2.1. Can you close your PR?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17664
Thanks - merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15398
I pushed a commit. Hopefully that fixes it.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15398
I've resolved the conflict and merged this in master/branch-2.1. Thanks.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17630
Thanks for the explanation.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17630
Wait - are we storing UTF8Strings directly in the catalog for statistics?
That doesn't make sense ... if we are not, then we are not using internal
types. In that case we should document clearly
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17633
Then it should work.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17633
Does this work for non-Hive tables?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17623#discussion_r111505420
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -149,7 +149,7 @@ case class Cast(child: Expression
Github user rxin closed the pull request at:
https://github.com/apache/spark/pull/17196
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17630
When we update Spark and change the internal format, we'd still need to
keep the current implementation.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17630
hm this means we will forever need to be able to read the internal format,
doesn't it?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17614
Merging in master.
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17616
[SPARK-20304][SQL] AssertNotNull should not include path in string
representation
## What changes were proposed in this pull request?
AssertNotNull's toString/simpleString dumps the entire
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17616
cc @cloud-fan
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17614#discussion_r111064001
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DataType.scala ---
@@ -288,4 +288,30 @@ object DataType {
case (fromDataType
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17604
Merging in master.
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17604
[SPARK-20289][SQL] Use StaticInvoke to box primitive types
## What changes were proposed in this pull request?
Dataset typed API currently uses NewInstance to box primitive types (i.e.
calling
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17599
Merging in master/branch-2.1.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17599
LGTM pending Jenkins.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17596
BTW a potential, better way to solve this is to combine all the metrics
into a single accumulator.
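The suggestion above, folding several metrics into a single accumulator, can be sketched in plain Python. This is an illustration of the idea only, not Spark's accumulator API; the class name and metric names are assumptions.

```python
# Sketch: instead of one accumulator per metric, carry a dict of named
# counters in one accumulator-like object and merge per-task updates in
# a single step. Illustrative only; not Spark's AccumulatorV2 API.

class CombinedMetrics:
    def __init__(self):
        self.values = {}

    def add(self, name, amount):
        # Record a local update to one named metric.
        self.values[name] = self.values.get(name, 0) + amount

    def merge(self, other):
        # Fold another task's metrics in with one operation.
        for name, amount in other.values.items():
            self.add(name, amount)
        return self

task1 = CombinedMetrics()
task1.add("numFiles", 3)
task1.add("metadataTime", 120)

task2 = CombinedMetrics()
task2.add("numFiles", 2)

total = CombinedMetrics().merge(task1).merge(task2)
# total.values == {"numFiles": 5, "metadataTime": 120}
```

The appeal is that only one object per task travels back to the driver, rather than one update per metric.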
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17596#discussion_r110765367
--- Diff:
core/src/main/scala/org/apache/spark/util/InternalLongAccumulator.scala ---
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17595
Merging this since as long as it compiles the change should be fine.
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17595
[SPARK-20283][SQL] Add preOptimizationBatches
## What changes were proposed in this pull request?
We currently have postHocOptimizationBatches, but not
preOptimizationBatches. This patch adds
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17592
Should this go into branch-2.1 as well?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17574
Meh let's not bother. There isn't any harm in the current setup since it's
already a transitive dependency. Why waste time on those?
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17574
[SPARK-20264][SQL] asm should be non-test dependency in sql/core
## What changes were proposed in this pull request?
sql/core module currently declares asm as a test scope dependency.
Transitively
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17573
[SPARK-20262][SQL] AssertNotNull should throw NullPointerException
## What changes were proposed in this pull request?
AssertNotNull currently throws RuntimeException. It should throw
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17570
Merging in master/branch-2.1.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17570
LGTM pending Jenkins.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17570
Jenkins, add to whitelist.
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17555
[SPARK-19495][SQL] Make SQLConf slightly more extensible - addendum
## What changes were proposed in this pull request?
This is a tiny addendum to SPARK-19495 to remove the private visibility
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17554
Thanks - merging in master.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17541#discussion_r110013198
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/broadcastMode.scala
---
@@ -26,10 +26,7 @@ import
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17471
cc @cloud-fan / @ueshin / @sameeragarwal can you review this?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17521
@nsyca can you look into it?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17521
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17522
Thanks - merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17521
LGTM
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17499
Great - please close this. Thanks!
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17487
Merging in master.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17505#discussion_r109553390
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -242,6 +251,16 @@ private[client] class Shim_v0_12 extends Shim
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17499
Maybe Hive can do it in Hive.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17521
To be clear, I don't think we should have two separate places to define
config entries. If this is what the pr is doing, I strongly veto.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17522
Seems fine to me, since the number of external resource managers is small.
We should definitely make it clear there is no firm commitment currently to
merge this into Spark though.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17518
Is this an API change or just a documentation change? The title suggests
you are changing public facing APIs?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17476
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17490
I don't think the change makes sense ...
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17476#discussion_r109092194
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileIndex.scala
---
@@ -72,4 +72,14 @@ trait FileIndex {
/** Schema
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17476#discussion_r109092246
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/CatalogFileIndex.scala
---
@@ -111,7 +113,8 @@ private class
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17476
cc @ericl, @bogdanrdc, @adrian-ionescu, @cloud-fan
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17476
[SPARK-20151][SQL] Account for partition pruning in scan metadataTime
metrics
## What changes were proposed in this pull request?
After SPARK-20136, we report metadata timing metrics in scan
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17475
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17465
Let me merge this now. I will send a follow-up PR to take the logical
planning time into account (otherwise in the vast majority of cases, i.e.
pruned partitions, the metadata operation time
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17465
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17470
Merging in master. Thanks.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17475
LGTM
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17465
cc @ericl, @bogdanrdc, @adrian-ionescu, @cloud-fan
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17465
[SPARK-20136][SQL] Add num files and metadata operation timing to scan
operator metrics
## What changes were proposed in this pull request?
This patch adds explicit metadata operation timing
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17464
Merging in master/branch-2.1.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17464#discussion_r108600240
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLListenerSuite.scala
---
@@ -477,9 +477,11 @@ private case class MyPlan(sc
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/17464
[SPARK-20134][SQL] SQLMetrics.postDriverMetricUpdates to simplify driver
side metric updates
## What changes were proposed in this pull request?
It is not super intuitive how to update SQLMetric
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17424
Hm - so this would require us to update the test suite every time there is
an update to the docs?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17420
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17399
Thanks - merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17186
Merging in master.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17399
@roxannemoslehi can you fix the title? We can then merge this.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17399
Yea we definitely need a better title. Thanks for contributing though.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17397
LGTM
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17396
Merging in master. Thanks.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17312
That would be pretty confusing wouldn't it? The table has 3 entries and the
title says only 2.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17312
Your screenshot had 3 executors. Why does it say 2?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17359
Why do we want this? Seems extremely low usage on this function in the wild.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17380
Thanks - merging in master/branch-2.1.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17343
Can you add some documentation inline so in the future we'd know why
specific implementations were chosen?