lxs360 opened a new issue, #4739:
URL: https://github.com/apache/iceberg/issues/4739

   SLF4J: Defaulting to no-operation (NOP) logger implementation
   SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
   Exception in thread "main" software.amazon.awssdk.core.exception.SdkClientException: Unable to load region from any of the providers in the chain software.amazon.awssdk.regions.providers.DefaultAwsRegionProviderChain@5503de1: [software.amazon.awssdk.regions.providers.SystemSettingsRegionProvider@32a2a6be: Unable to load region from system settings. Region must be specified either via environment variable (AWS_REGION) or system property (aws.region)., software.amazon.awssdk.regions.providers.AwsProfileRegionProvider@1d4fb213: No region provided in profile: default, software.amazon.awssdk.regions.providers.InstanceProfileRegionProvider@67bf0480: Unable to contact EC2 metadata service.]
           at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:98)
           at software.amazon.awssdk.regions.providers.AwsRegionProviderChain.getRegion(AwsRegionProviderChain.java:70)
           at software.amazon.awssdk.awscore.client.builder.AwsDefaultClientBuilder.regionFromDefaultProvider(AwsDefaultClientBuilder.java:202)
           at software.amazon.awssdk.awscore.client.builder.AwsDefaultClientBuilder.resolveRegion(AwsDefaultClientBuilder.java:184)
   
   My code:
    import org.apache.spark.sql.SparkSession

    // Attempt to force region/credentials through JVM system properties
    // (currently commented out):
    val prop = System.getProperties()
    // prop.put("aws.region", "cn-north-1")
    // prop.put("aws.accessKeyId", "sadsasadggdsf")
    // prop.put("aws.secretAccessKey", "fgfhetrsdaasdsa")
    // prop.put("aws.endpoint", "http://ip:7580")
    // System.setProperties(prop)

    val spark: SparkSession = SparkSession.builder().master("local[*]").appName("iceberg")
      .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.my_catalog.warehouse", "s3://buk/test")
      .config("spark.sql.catalog.my_catalog.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
      .config("spark.sql.catalog.my_catalog.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
      .config("spark.sql.catalog.my_catalog.lock-impl", "org.apache.iceberg.aws.glue.DynamoLockManager")
      .config("spark.sql.catalog.my_catalog.lock.table", "myGlueLockTable")
      .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
      .getOrCreate()

    // fs.s3a.* settings configure Hadoop's S3AFileSystem, not Iceberg's S3FileIO.
    // Note: keys set directly on hadoopConfiguration must not carry the
    // "spark.hadoop." prefix; that prefix is only stripped from SparkConf entries.
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.endpoint", "http://ip:7580")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.path.style.access", "true")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "sadsasadggdsf")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "fgfhetrsdaasdsa")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.session.token", "sessionToken")
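
   From the Iceberg AWS documentation, S3FileIO reads its own s3.* catalog properties and ignores the fs.s3a.* keys above. A minimal sketch of what I understand the catalog configuration should look like, assuming an Iceberg release that supports the s3.endpoint, s3.path-style-access, s3.access-key-id, s3.secret-access-key, and client.region properties (client.region is newer; on older releases the region would have to come from the aws.region system property or the AWS_REGION environment variable):

    // Sketch: point S3FileIO at a private S3-compatible endpoint.
    // The region is a dummy value; the SDK requires one even though
    // a private store ignores it once the endpoint is overridden.
    val spark = SparkSession.builder().master("local[*]").appName("iceberg")
      .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.my_catalog.warehouse", "s3://buk/test")
      .config("spark.sql.catalog.my_catalog.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
      .config("spark.sql.catalog.my_catalog.s3.endpoint", "http://ip:7580")
      .config("spark.sql.catalog.my_catalog.s3.path-style-access", "true")
      .config("spark.sql.catalog.my_catalog.s3.access-key-id", "sadsasadggdsf")
      .config("spark.sql.catalog.my_catalog.s3.secret-access-key", "fgfhetrsdaasdsa")
      .config("spark.sql.catalog.my_catalog.client.region", "us-east-1")
      .getOrCreate()

   Note also that GlueCatalog always calls the real AWS Glue service, so a fully private deployment would need a catalog that does not depend on AWS (for example HadoopCatalog).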
   
   
   My question: the S3 I am using is a self-hosted private cloud. The only credentials are an endpoint, an access key pair, and a bucket; there is no region. How can the region be skipped?
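
   As the stack trace shows, the SDK v2 DefaultAwsRegionProviderChain must resolve some region before any client can be built, so the region cannot be skipped outright. For a private store that ignores regions, a placeholder appears to be enough to satisfy the chain; a minimal sketch (the region value is an arbitrary assumption):

    // Set before the first AWS SDK client is created; most private
    // S3-compatible stores ignore the value once the endpoint is overridden.
    System.setProperty("aws.region", "us-east-1")
    // Equivalently: export AWS_REGION=us-east-1 in the environment.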
   
   If I use the following settings instead:

    // prop.put("aws.region", "cn-north-1")
    // prop.put("aws.accessKeyId", "sadsasadggdsf")
    // prop.put("aws.secretAccessKey", "fgfhetrsdaasdsa")
    // prop.put("aws.endpoint", "http://ip:7580")

   then the client goes off to the public AWS cloud instead, and of course my key does not exist in AWS S3 at all.
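
   That behavior seems expected: aws.region, aws.accessKeyId, and aws.secretAccessKey are system properties the SDK v2 actually reads, but aws.endpoint is not, so without an endpoint override the client is built against the public AWS endpoint for cn-north-1. The override has to be applied where the client is constructed. A minimal SDK-level sketch of what the S3 client ultimately needs (endpoint and keys are the placeholders from above):

    import java.net.URI
    import software.amazon.awssdk.auth.credentials.{AwsBasicCredentials, StaticCredentialsProvider}
    import software.amazon.awssdk.regions.Region
    import software.amazon.awssdk.services.s3.{S3Client, S3Configuration}

    // Build an S3 client against the private endpoint; the region is a
    // dummy value that only satisfies the SDK's region requirement.
    val s3 = S3Client.builder()
      .endpointOverride(URI.create("http://ip:7580"))
      .region(Region.of("us-east-1"))
      .credentialsProvider(StaticCredentialsProvider.create(
        AwsBasicCredentials.create("sadsasadggdsf", "fgfhetrsdaasdsa")))
      .serviceConfiguration(S3Configuration.builder().pathStyleAccessEnabled(true).build())
      .build()

   In Iceberg this wiring would come either from the s3.* catalog properties sketched earlier or from a custom AwsClientFactory supplied via the client.factory catalog property.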
   
   How can this be solved?

