[ https://issues.apache.org/jira/browse/HADOOP-18344?focusedWorklogId=795668&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795668 ]

ASF GitHub Bot logged work on HADOOP-18344:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 27/Jul/22 14:11
            Start Date: 27/Jul/22 14:11
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on PR #4637:
URL: https://github.com/apache/hadoop/pull/4637#issuecomment-1196813978

   CSE-KMS mode is failing, but as I've not tried it before, I can't blame the SDK. (An illustrative sketch of the CSE-KMS settings involved follows the log below.)
   ```
   time bin/hadoop fs -copyFromLocal -t 10 share/hadoop/tools/lib/hadoop-azure-3.4.0-SNAPSHOT.jar $BUCKET/
   2022-07-27 15:10:11,422 [main] WARN  s3a.S3AFileSystem (S3AFileSystem.java:createRequestFactory(1004)) - Unknown storage class property fs.s3a.create.storage.class: ; falling back to default storage class
   2022-07-27 15:10:11,999 [main] WARN  s3.AmazonS3EncryptionClientV2 (AmazonS3EncryptionClientV2.java:warnOnLegacyCryptoMode(409)) - The S3 Encryption Client is configured to read encrypted data with legacy encryption modes through the CryptoMode setting. If you don't have objects encrypted with these legacy modes, you should disable support for them to enhance security. See https://docs.aws.amazon.com/general/latest/gr/aws_sdk_cryptography.html
   2022-07-27 15:10:11,999 [main] WARN  s3.AmazonS3EncryptionClientV2 (AmazonS3EncryptionClientV2.java:warnOnRangeGetsEnabled(401)) - The S3 Encryption Client is configured to support range get requests. Range gets do not provide authenticated encryption properties even when used with an authenticated mode (AES-GCM). See https://docs.aws.amazon.com/general/latest/gr/aws_sdk_cryptography.html
   2022-07-27 15:10:12,000 [main] INFO  s3a.DefaultS3ClientFactory (LogExactlyOnce.java:info(44)) - S3 client-side encryption enabled: Ignore S3-CSE Warnings.
   2022-07-27 15:10:12,010 [main] INFO  impl.DirectoryPolicyImpl (DirectoryPolicyImpl.java:getDirectoryPolicy(189)) - Directory markers will be kept
   2022-07-27 15:10:12,477 [main] DEBUG shell.Command (Command.java:displayError(476)) - copyFromLocal failure
   org.apache.hadoop.fs.PathExistsException: `s3a://stevel-london/hadoop-azure-3.4.0-SNAPSHOT.jar': File exists
           at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:421)
           at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:362)
           at org.apache.hadoop.fs.shell.CopyCommandWithMultiThread.copyFileToTarget(CopyCommandWithMultiThread.java:144)
           at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:293)
           at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:278)
           at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:382)
           at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:345)
           at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:318)
           at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:273)
           at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:300)
           at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:284)
           at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:244)
           at org.apache.hadoop.fs.shell.CopyCommandWithMultiThread.processArguments(CopyCommandWithMultiThread.java:89)
           at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:313)
           at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:121)
           at org.apache.hadoop.fs.shell.Command.run(Command.java:191)
           at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
           at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)
           at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:97)
           at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
   copyFromLocal: `s3a://stevel-london/hadoop-azure-3.4.0-SNAPSHOT.jar': File exists
   2022-07-27 15:10:12,482 [shutdown-hook-0] INFO  statistics.IOStatisticsLogging (IOStatisticsLogging.java:logIOStatisticsAtLevel(269)) - IOStatistics: counters=((action_http_head_request=1)
   (audit_request_execution=1)
   (audit_span_creation=4)
   (object_metadata_request=1)
   (op_get_file_status=2)
   (op_glob_status=1)
   (store_io_request=2));
   
   gauges=((client_side_encryption_enabled=1));
   
   minimums=((action_http_head_request.min=433)
   (op_get_file_status.min=1)
   (op_glob_status.min=4));
   
   maximums=((action_http_head_request.max=433)
   (op_get_file_status.max=436)
   (op_glob_status.max=4));
   
   means=((action_http_head_request.mean=(samples=1, sum=433, mean=433.0000))
   (op_get_file_status.mean=(samples=2, sum=437, mean=218.5000))
   (op_glob_status.mean=(samples=1, sum=4, mean=4.0000)));
   
   
   ________________________________________________________
   Executed in    1.72 secs    fish           external
      usr time    3.35 secs    0.08 millis    3.35 secs
      sys time    0.16 secs    1.49 millis    0.16 secs
   
   ```
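
   For context, "CSE-KMS" above means S3A client-side encryption with keys managed by AWS KMS. Below is a minimal sketch of how that mode would be switched on for a run like the one above, assuming the options are passed as per-command generic options rather than set in core-site.xml; the KMS key ARN is an illustrative placeholder, not taken from the log:

   ```
   # hedged sketch: enable S3A client-side encryption (CSE-KMS) for a single shell command.
   # The key ARN below is an illustrative placeholder, not the key used in the test run above.
   bin/hadoop fs \
     -D fs.s3a.encryption.algorithm=CSE-KMS \
     -D fs.s3a.encryption.key=arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE-KEY-ID \
     -copyFromLocal -t 10 share/hadoop/tools/lib/hadoop-azure-3.4.0-SNAPSHOT.jar $BUCKET/
   ```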
   




Issue Time Tracking
-------------------

    Worklog Id:     (was: 795668)
    Time Spent: 1.5h  (was: 1h 20m)

> AWS SDK update to 1.12.262 to address jackson  CVE-2018-7489
> ------------------------------------------------------------
>
>                 Key: HADOOP-18344
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18344
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0, 3.3.4
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Yet another Jackson CVE in the AWS SDK:
> https://github.com/apache/hadoop/pull/4491/commits/5496816b472473eb7a9c174b7d3e69b6eee1e271
> Maybe we need a list of all the shaded Jacksons we get on the classpath, and 
> a process for upgrading them all at the same time.
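
One way to act on that suggestion, sketched here as an assumption rather than anything in the patch: walk the runtime classpath and flag every jar bundling Jackson classes, shaded or not, so they can be tracked and bumped together.

```
# hedged sketch, assuming a deployed Hadoop tarball and the standard `hadoop classpath` command:
# print every jar on the classpath that contains Jackson classes; relocated/shaded copies are
# usually caught too, since the relocated package paths still contain the string "jackson".
for jar in $(bin/hadoop classpath --glob | tr ':' '\n' | grep '\.jar$' | sort -u); do
  if unzip -l "$jar" 2>/dev/null | grep -qi 'jackson'; then
    echo "$jar"
  fi
done
```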


