[ 
https://issues.apache.org/jira/browse/SPARK-39804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17610696#comment-17610696
 ] 

cornel creanga commented on SPARK-39804:
----------------------------------------

Spark will only load its default logging configuration file when no custom configuration has been declared (it checks whether log4j2 is still using org.apache.logging.log4j.core.config.DefaultConfiguration).
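
If you want to see which configuration log4j2 is currently running with, here is a minimal sketch (my own example, assuming the log4j2 core jars are on the classpath): inspect the active LoggerContext before building the SparkSession.

{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.DefaultConfiguration;

// Look at the configuration log4j2 is currently using. If it is still the
// DefaultConfiguration, Spark will install its bundled defaults
// (org/apache/spark/log4j2-defaults.properties) when the session starts.
LoggerContext context = (LoggerContext) LogManager.getContext(false);
boolean usingDefaults = context.getConfiguration() instanceof DefaultConfiguration;
System.out.println("log4j2 default configuration active: " + usingDefaults);
{code}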

If you need an example of how to declare a configuration programmatically, you can 
take a look at my git repo 
[here|https://github.com/cornelcreanga/spark-playground/blob/master/examples/src/main/scala/com/creanga/playground/spark/example/logging/CustomConfigurationFactory.java].
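
The general shape is the programmatic-configuration pattern from the log4j2 manual; the sketch below is that pattern with illustrative names and layout, not the exact code from the repo above:

{code:java}
import java.net.URI;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.ConfigurationFactory;
import org.apache.logging.log4j.core.config.ConfigurationSource;
import org.apache.logging.log4j.core.config.Order;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilder;
import org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration;
import org.apache.logging.log4j.core.config.plugins.Plugin;

// Once a custom ConfigurationFactory is active, log4j2 is no longer on the
// DefaultConfiguration, so Spark leaves the logging setup alone.
@Plugin(name = "CustomConfigurationFactory", category = ConfigurationFactory.CATEGORY)
@Order(50)
public class CustomConfigurationFactory extends ConfigurationFactory {

    private static Configuration createConfiguration(String name,
            ConfigurationBuilder<BuiltConfiguration> builder) {
        builder.setConfigurationName(name);
        // Console appender with a simple pattern layout.
        builder.add(builder.newAppender("Stdout", "CONSOLE")
                .add(builder.newLayout("PatternLayout")
                        .addAttribute("pattern", "%d{HH:mm:ss.SSS} %-5level %logger - %msg%n")));
        // Root logger at ERROR so the INFO/WARN startup noise is suppressed.
        builder.add(builder.newRootLogger(Level.ERROR)
                .add(builder.newAppenderRef("Stdout")));
        return builder.build();
    }

    @Override
    public Configuration getConfiguration(LoggerContext loggerContext, ConfigurationSource source) {
        return getConfiguration(loggerContext, source.toString(), null);
    }

    @Override
    public Configuration getConfiguration(LoggerContext loggerContext, String name, URI configLocation) {
        return createConfiguration(name, newConfigurationBuilder());
    }

    @Override
    protected String[] getSupportedTypes() {
        return new String[]{"*"};
    }
}
{code}

Register it before the first logger is created (so before SparkSession.builder()), either by calling ConfigurationFactory.setConfigurationFactory(new CustomConfigurationFactory()) or through the log4j2.configurationFactory system property.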
 

> Override Spark Core_2.12 (v3.3.0) logging configuration
> -------------------------------------------------------
>
>                 Key: SPARK-39804
>                 URL: https://issues.apache.org/jira/browse/SPARK-39804
>             Project: Spark
>          Issue Type: Question
>          Components: Spark Core
>    Affects Versions: 3.3.0
>            Reporter: Jitin Dominic
>            Priority: Major
>
> I'm using Grails 2.5.4 and trying to use a _SparkSession_ instance for 
> generating Parquet output. Recently, I upgraded Spark Core and its related 
> dependencies to their latest version (v3.3.0).
>  
> During the SparkSession builder() initialization, I notice that some extra 
> logs are getting displayed:
>  
> {noformat}
> Using Spark's default log4j profile: 
> org/apache/spark/log4j2-defaults.properties
> 22/07/13 11:58:54 WARN Utils: Your hostname, XY resolves to a loopback 
> address: 127.0.1.1; using 1XX.1XX.0.1XX instead (on interface wlo1)
> 22/07/13 11:58:54 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to 
> another address
> 22/07/13 11:58:54 INFO SparkContext: Running Spark version 3.3.0
> 22/07/13 11:58:54 WARN NativeCodeLoader: Unable to load native-hadoop library 
> for your platform... using builtin-java classes where applicable
> 22/07/13 11:58:54 INFO ResourceUtils: 
> ==============================================================
> 22/07/13 11:58:54 INFO ResourceUtils: No custom resources configured for 
> spark.driver.
> 22/07/13 11:58:54 INFO ResourceUtils: 
> ==============================================================
> 22/07/13 11:58:54 INFO SparkContext: Submitted application: ABCDE
> 22/07/13 11:58:54 INFO ResourceProfile: Default ResourceProfile created, 
> executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , 
> memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: 
> offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: 
> cpus, amount: 1.0)
> 22/07/13 11:58:54 INFO ResourceProfile: Limiting resource is cpu
> 22/07/13 11:58:54 INFO ResourceProfileManager: Added ResourceProfile id: 0
> 22/07/13 11:58:54 INFO SecurityManager: Changing view acls to: xy
> 22/07/13 11:58:54 INFO SecurityManager: Changing modify acls to: xy
> 22/07/13 11:58:54 INFO SecurityManager: Changing view acls groups to: 
> 22/07/13 11:58:54 INFO SecurityManager: Changing modify acls groups to: 
> 22/07/13 11:58:54 INFO SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users  with view permissions: Set(xy); groups 
> with view permissions: Set(); users  with modify permissions: Set(xy); groups 
> with modify permissions: Set()
> 22/07/13 11:58:54 INFO Utils: Successfully started service 'sparkDriver' on 
> port 39483.
> 22/07/13 11:58:54 INFO SparkEnv: Registering MapOutputTracker
> 22/07/13 11:58:54 INFO SparkEnv: Registering BlockManagerMaster
> 22/07/13 11:58:54 INFO BlockManagerMasterEndpoint: Using 
> org.apache.spark.storage.DefaultTopologyMapper for getting topology 
> information
> 22/07/13 11:58:54 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint 
> up
> 22/07/13 11:58:54 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
> 22/07/13 11:58:55 INFO DiskBlockManager: Created local directory at 
> /tmp/blockmgr-cf39a58e-e5bc-4a26-b92a-d945a0deb8e7
> 22/07/13 11:58:55 INFO MemoryStore: MemoryStore started with capacity 2004.6 
> MiB
> 22/07/13 11:58:55 INFO SparkEnv: Registering OutputCommitCoordinator
> 22/07/13 11:58:55 INFO Utils: Successfully started service 'SparkUI' on port 
> 4040.
> 22/07/13 11:58:55 INFO Executor: Starting executor ID driver on host 
> 1XX.1XX.0.1XX
> 22/07/13 11:58:55 INFO Executor: Starting executor with user classpath 
> (userClassPathFirst = false): ''
> 22/07/13 11:58:55 INFO Utils: Successfully started service 
> 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33993.
> 22/07/13 11:58:55 INFO NettyBlockTransferService: Server created on 
> 192.168.0.135:33993
> 22/07/13 11:58:55 INFO BlockManager: Using 
> org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
> policy
> 22/07/13 11:58:55 INFO BlockManagerMaster: Registering BlockManager 
> BlockManagerId(driver, 192.168.0.135, 33993, None)
> 22/07/13 11:58:55 INFO BlockManagerMasterEndpoint: Registering block manager 
> 192.168.0.135:33993 with 2004.6 MiB RAM, BlockManagerId(driver, 
> 192.168.0.135, 33993, None)
> 22/07/13 11:58:55 INFO BlockManagerMaster: Registered BlockManager 
> BlockManagerId(driver, 192.168.0.135, 33993, None)
> 22/07/13 11:58:55 INFO BlockManager: Initialized BlockManager: 
> BlockManagerId(driver, 192.168.0.135, 33993, None){noformat}
>  
> Before initializing the SparkSession instance using the builder() method, 
> I've configured the logger levels programmatically with:
> {code:java}
> import org.apache.logging.log4j.Level
> import org.apache.logging.log4j.core.config.Configurator
>
> Configurator.setLevel("org", Level.ERROR)
> Configurator.setLevel("org.apache.spark", Level.ERROR)
> Configurator.setLevel("akka", Level.ERROR)
> Configurator.setLevel("scala", Level.ERROR)
> Configurator.setLevel("java", Level.ERROR)
> Configurator.setLevel("org.slf4j", Level.ERROR)
> Configurator.setLevel("com", Level.ERROR)
> Configurator.setLevel("javax", Level.ERROR)
> Configurator.setLevel("jakarta", Level.ERROR)
> Configurator.setLevel("io", Level.ERROR)
> Configurator.setLevel("net", Level.ERROR)
>  {code}
> I notice that it's picking up Spark's default _log4j2.properties_ file. Is 
> there a way I can override the logging configuration programmatically or 
> disable this default logging so that these extra logs don't appear?


