The 5.2 release branch has been created. Please feel free to build the
artifacts for local testing purposes.

We should likely have a 5.2.0 release in about a week’s time.


On Mon, Feb 19, 2024 at 9:40 PM Istvan Toth <st...@cloudera.com.invalid>
wrote:

> While that may be true for Jackson, it generally is not true for all
> components.
> Replacing a dependency is sometimes really as simple as a version update,
> and sometimes requires extensive code modifications or revamping the
> dependencies.
>
> AFAICT the current de facto policy of the Apache HBase community is not
> to try to replace Hadoop-only dependencies,
> and the policy of the Apache Phoenix community is not to try to replace
> Hadoop-only or HBase-only dependencies.
>
> IMO the best solution is to use the new artifact added in PHOENIX-7139
> <https://issues.apache.org/jira/browse/PHOENIX-7139> with the
> hbase-mapreduce (or hbase-shaded-byo-hadoop) JARs,
> which both omit Hadoop, and can use the patched Hadoop provided by Trino.
>
> According to my preliminary tests, HBase 2.5 built for Hadoop 3.2.3 seems to
> work with the Hadoop 3.3.x libraries, but the same is not
> true for HBase 2.4 built with Hadoop 3.1.2.
>
> best regards
> Istvan
>
> On Mon, Feb 19, 2024 at 7:26 PM Mateusz Gajewski <
> mateusz.gajew...@starburstdata.com> wrote:
>
> > In Trino we have our own patched Hadoop library (3.3.5 based) but we are
> > slowly removing dependencies on Hadoop from the codebase (it's pretty
> > isolated already).
> >
> > As for HBase - if Phoenix is shading HBase, then for the end user (like
> > Trino) the CVEs are coming from Phoenix, not HBase. Can you exclude the
> > transitive dependencies and provide your own instead? E.g., Jackson is
> > in almost every case a drop-in replacement for an older version.
> >
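For non-shaded transitive dependencies, the exclude-and-replace approach suggested above can be sketched in a consumer's pom.xml roughly as follows (coordinates and versions are illustrative assumptions; note that classes already relocated into a shaded jar cannot be excluded this way):

```xml
<!-- Sketch only: exclude a vulnerable transitive dependency and supply a
     newer one. This works for regular transitive dependencies; it does NOT
     work for classes bundled inside a shaded phoenix-client jar. -->
<dependency>
  <groupId>org.apache.phoenix</groupId>
  <artifactId>phoenix-client-hbase-2.4</artifactId>
  <version>5.1.4</version>
  <exclusions>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- Provide a patched version directly -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.13.4.2</version>
</dependency>
```

Whether this helps in practice depends on how the Phoenix client jar is assembled: exclusions only remove artifacts Maven resolves transitively, not classes already bundled by shading.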
> > On Mon, Feb 19, 2024 at 16:39 Istvan Toth <st...@cloudera.com.invalid>
> > wrote:
> >
> > > Thanks, Mateusz.
> > >
> > > The vast majority of these comes from either HBase or Hadoop.
> > > (We always do a CVE pass on the direct Phoenix dependencies before
> > > release.)
> > >
> > > Unfortunately, Hadoop is generally not binary compatible between minor
> > > releases, so using a newer Hadoop minor release than the default used
> > > by HBase is not always an option.
> > >
> > > We definitely will update Hadoop to 3.2.4 in the HBase 2.5 profile in
> > > 5.1, but we are still testing whether Hadoop 3.2 works with the HBase
> > > 2.4 profile (which builds with Hadoop 3.1.3 now).
> > >
> > > Depending on how the release schedules align, either 5.2 or 5.2.1 is
> > > going to support HBase 2.6, which is built with Hadoop 3.3 by default,
> > > so that should also help.
> > >
> > > 5.2 is also going to have a new shaded artifact, which works with the
> > > hbase-shaded-mapreduce jars, and as such will include neither Hadoop
> > > nor HBase libraries.
> > > I think that moving to that one will be the best solution for Trino,
> > > as it can then independently manage the Hadoop and HBase versions used.
> > > (It also solves the incompatibility between the standard HBase
> > > libraries and Phoenix.)
> > > See https://issues.apache.org/jira/browse/PHOENIX-7139 .
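A rough sketch of what that dependency layout could look like for a consumer such as Trino (the Phoenix artifact id below is a hypothetical placeholder, since the actual name ships with PHOENIX-7139 and the 5.2 release; the HBase and Hadoop coordinates are likewise illustrative):

```xml
<!-- Sketch only: the new Phoenix shaded client that bundles neither Hadoop
     nor HBase, combined with independently chosen HBase and Hadoop builds.
     "phoenix-client-byo-shaded-hbase" is a placeholder artifact id; check
     the actual name in PHOENIX-7139 / the 5.2 release notes. -->
<dependency>
  <groupId>org.apache.phoenix</groupId>
  <artifactId>phoenix-client-byo-shaded-hbase</artifactId>
  <version>5.2.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-shaded-mapreduce</artifactId>
  <version>2.5.7</version>
</dependency>
<!-- Hadoop is then supplied by the consumer, e.g. Trino's patched 3.3.x -->
```

The point of this layout is that the Hadoop and HBase versions become the consumer's choice, so CVE remediation in those stacks no longer has to wait for a Phoenix release.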
> > >
> > > best regards
> > > Istvan
> > >
> > >
> > >
> > > On Mon, Feb 19, 2024 at 11:13 AM Mateusz Gajewski <
> > > mateusz.gajew...@starburstdata.com> wrote:
> > >
> > > > Rendered:
> > > > https://github.com/trinodb/trino/pull/20739#issuecomment-1952114587
> > > >
> > > >
> > > > On Mon, Feb 19, 2024 at 10:43 AM Mateusz Gajewski <
> > > > mateusz.gajew...@starburstdata.com> wrote:
> > > >
> > > > > Yeah, attachment was sent but not delivered.
> > > > >
> > > > > Inline version
> > > > >
> > > > > "avro" "1.7.7" "java-archive" "CVE-2023-39410" "High" "When
> > > deserializing
> > > > > untrusted or corrupted data, it is possible for a reader to consume
> > > > memory
> > > > > beyond the allowed constraints and thus lead to out of memory on
> the
> > > > > system. This issue affects Java applications using Apache Avro Java
> > SDK
> > > > up
> > > > > to and including 1.11.2. Users should update to apache-avro version
> > > > 1.11.3
> > > > > which addresses this issue. " "fixed" "[1.11.3]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "commons-net" "3.6" "java-archive" "CVE-2021-37533" "Medium" "Prior
> > to
> > > > > Apache Commons Net 3.9.0, Net's FTP client trusts the host from
> PASV
> > > > > response by default. A malicious server can redirect the Commons
> Net
> > > code
> > > > > to use a different host, but the user has to connect to the
> malicious
> > > > > server in the first place. This may lead to leakage of information
> > > about
> > > > > services running on the private network of the client. The default
> in
> > > > > version 3.9.0 is now false to ignore such hosts, as cURL does. See
> > > > > https://issues.apache.org/jira/browse/NET-711."; "fixed" "[3.9.0]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "hadoop-common" "3.1.4" "java-archive" "CVE-2022-26612" "Critical"
> > "In
> > > > > Apache Hadoop, The unTar function uses unTarUsingJava function on
> > > Windows
> > > > > and the built-in tar utility on Unix and other OSes. As a result, a
> > TAR
> > > > > entry may create a symlink under the expected extraction directory
> > > which
> > > > > points to an external directory. A subsequent TAR entry may extract
> > an
> > > > > arbitrary file into the external directory using the symlink name.
> > This
> > > > > however would be caught by the same targetDirPath check on Unix
> > because
> > > > of
> > > > > the getCanonicalPath call. However on Windows, getCanonicalPath
> > doesn't
> > > > > resolve symbolic links, which bypasses the check. unpackEntries
> > during
> > > > TAR
> > > > > extraction follows symbolic links which allows writing outside
> > expected
> > > > > base directory on Windows. This was addressed in Apache Hadoop
> 3.2.3"
> > > > > "fixed" "[3.2.3]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "hadoop-common" "3.1.4" "java-archive" "CVE-2022-25168" "Critical"
> > > > "Apache
> > > > > Hadoop's FileUtil.unTar(File, File) API does not escape the input
> > file
> > > > name
> > > > > before being passed to the shell. An attacker can inject arbitrary
> > > > > commands. This is only used in Hadoop 3.3
> > > > > InMemoryAliasMap.completeBootstrapTransfer, which is only ever run
> > by a
> > > > > local user. It has been used in Hadoop 2.x for yarn localization,
> > which
> > > > > does enable remote code execution. It is used in Apache Spark, from
> > the
> > > > SQL
> > > > > command ADD ARCHIVE. As the ADD ARCHIVE command adds new binaries
> to
> > > the
> > > > > classpath, being able to execute shell scripts does not confer new
> > > > > permissions to the caller. SPARK-38305. "Check existence of file
> > before
> > > > > untarring/zipping", which is included in 3.3.0, 3.1.4, 3.2.2,
> > prevents
> > > > > shell commands being executed, regardless of which version of the
> > > hadoop
> > > > > libraries are in use. Users should upgrade to Apache Hadoop 2.10.2,
> > > > 3.2.4,
> > > > > 3.3.3 or upper (including HADOOP-18136)." "fixed" "[3.2.4]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "hadoop-common" "3.1.4" "java-archive" "CVE-2021-37404" "Critical"
> > > "There
> > > > > is a potential heap buffer overflow in Apache Hadoop libhdfs native
> > > code.
> > > > > Opening a file path provided by user without validation may result
> > in a
> > > > > denial of service or arbitrary code execution. Users should upgrade
> > to
> > > > > Apache Hadoop 2.10.2, 3.2.3, 3.3.2 or higher." "fixed" "[3.2.3]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "httpclient" "4.5.2" "java-archive" "CVE-2020-13956" "Medium"
> "Apache
> > > > > HttpClient versions prior to version 4.5.13 and 5.0.3 can
> > misinterpret
> > > > > malformed authority component in request URIs passed to the library
> > as
> > > > > java.net.URI object and pick the wrong target host for request
> > > > execution."
> > > > > "fixed" "[4.5.13]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-9548"
> "Critical"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > br.com.anteros.dbcp.AnterosDBCPConfig (aka anteros-core)." "fixed"
> > > > > "[2.7.9.7]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-9547"
> "Critical"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > com.ibatis.sqlmap.engine.transaction.jta.JtaTransactionConfig (aka
> > > > > ibatis-sqlmap)." "fixed" "[2.7.9.7]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-8840"
> "Critical"
> > > > > "FasterXML jackson-databind 2.0.0 through 2.9.10.2 lacks certain
> > > > > xbean-reflect/JNDI blocking, as demonstrated by
> > > > > org.apache.xbean.propertyeditor.JndiConverter." "fixed" "[2.6.7.4]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-20330"
> "Critical"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain
> > > > > net.sf.ehcache blocking." "fixed" "[2.6.7.4]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-17531"
> "Critical"
> > > "A
> > > > > Polymorphic Typing issue was discovered in FasterXML
> jackson-databind
> > > > 2.0.0
> > > > > through 2.9.10. When Default Typing is enabled (either globally or
> > for
> > > a
> > > > > specific property) for an externally exposed JSON endpoint and the
> > > > service
> > > > > has the apache-log4j-extra (version 1.2.x) jar in the classpath,
> and
> > an
> > > > > attacker can provide a JNDI service to access, it is possible to
> make
> > > the
> > > > > service execute a malicious payload." "fixed" "[2.6.7.3]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-17267"
> "Critical"
> > > "A
> > > > > Polymorphic Typing issue was discovered in FasterXML
> jackson-databind
> > > > > before 2.9.10. It is related to
> > > > > net.sf.ehcache.hibernate.EhcacheJtaTransactionManagerLookup."
> "fixed"
> > > > > "[2.8.11.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-16943"
> "Critical"
> > > "A
> > > > > Polymorphic Typing issue was discovered in FasterXML
> jackson-databind
> > > > 2.0.0
> > > > > through 2.9.10. When Default Typing is enabled (either globally or
> > for
> > > a
> > > > > specific property) for an externally exposed JSON endpoint and the
> > > > service
> > > > > has the p6spy (3.8.6) jar in the classpath, and an attacker can
> find
> > an
> > > > RMI
> > > > > service endpoint to access, it is possible to make the service
> > execute
> > > a
> > > > > malicious payload. This issue exists because of
> > > > > com.p6spy.engine.spy.P6DataSource mishandling." "fixed" "[2.6.7.3]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-16942"
> "Critical"
> > > "A
> > > > > Polymorphic Typing issue was discovered in FasterXML
> jackson-databind
> > > > 2.0.0
> > > > > through 2.9.10. When Default Typing is enabled (either globally or
> > for
> > > a
> > > > > specific property) for an externally exposed JSON endpoint and the
> > > > service
> > > > > has the commons-dbcp (1.4) jar in the classpath, and an attacker
> can
> > > find
> > > > > an RMI service endpoint to access, it is possible to make the
> service
> > > > > execute a malicious payload. This issue exists because of
> > > > > org.apache.commons.dbcp.datasources.SharedPoolDataSource and
> > > > > org.apache.commons.dbcp.datasources.PerUserPoolDataSource
> > mishandling."
> > > > > "fixed" "[2.9.10.1]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-16335"
> "Critical"
> > > "A
> > > > > Polymorphic Typing issue was discovered in FasterXML
> jackson-databind
> > > > > before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource.
> > > This
> > > > is
> > > > > a different vulnerability than CVE-2019-14540." "fixed" "[2.6.7.3]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-14892"
> "Critical"
> > > "A
> > > > > flaw was discovered in jackson-databind in versions before 2.9.10,
> > > > 2.8.11.5
> > > > > and 2.6.7.3, where it would permit polymorphic deserialization of a
> > > > > malicious object using commons-configuration 1 and 2 JNDI classes.
> An
> > > > > attacker could use this flaw to execute arbitrary code." "fixed"
> > > > > "[2.6.7.3]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-14540"
> "Critical"
> > > "A
> > > > > Polymorphic Typing issue was discovered in FasterXML
> jackson-databind
> > > > > before 2.9.10. It is related to com.zaxxer.hikari.HikariConfig."
> > > "fixed"
> > > > > "[2.6.7.3]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-14379"
> "Critical"
> > > > > "SubTypeValidator.java in FasterXML jackson-databind before 2.9.9.2
> > > > > mishandles default typing when ehcache is used (because of
> > > > >
> net.sf.ehcache.transaction.manager.DefaultTransactionManagerLookup),
> > > > > leading to remote code execution." "fixed" "[2.7.9.6]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2018-7489"
> "Critical"
> > > > > "FasterXML jackson-databind before 2.7.9.3, 2.8.x before 2.8.11.1
> and
> > > > 2.9.x
> > > > > before 2.9.5 allows unauthenticated remote code execution because
> of
> > an
> > > > > incomplete fix for the CVE-2017-7525 deserialization flaw. This is
> > > > > exploitable by sending maliciously crafted JSON input to the
> > readValue
> > > > > method of the ObjectMapper, bypassing a blacklist that is
> ineffective
> > > if
> > > > > the c3p0 libraries are available in the classpath." "fixed"
> > > "[2.8.11.1]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2018-14719"
> "Critical"
> > > > > "FasterXML jackson-databind 2.x before 2.9.7 might allow remote
> > > attackers
> > > > > to execute arbitrary code by leveraging failure to block the
> > > blaze-ds-opt
> > > > > and blaze-ds-core classes from polymorphic deserialization."
> "fixed"
> > > > > "[2.7.9.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2018-14718"
> "Critical"
> > > > > "FasterXML jackson-databind 2.x before 2.9.7 might allow remote
> > > attackers
> > > > > to execute arbitrary code by leveraging failure to block the
> > slf4j-ext
> > > > > class from polymorphic deserialization." "fixed" "[2.6.7.3]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2018-11307"
> "Critical"
> > > "An
> > > > > issue was discovered in FasterXML jackson-databind 2.0.0 through
> > 2.9.5.
> > > > Use
> > > > > of Jackson default typing along with a gadget class from iBatis
> > allows
> > > > > exfiltration of content. Fixed in 2.7.9.4, 2.8.11.2, and 2.9.6."
> > > "fixed"
> > > > > "[2.7.9.4]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2017-7525"
> "Critical"
> > "A
> > > > > deserialization flaw was discovered in the jackson-databind,
> versions
> > > > > before 2.6.7.1, 2.7.9.1 and 2.8.9, which could allow an
> > unauthenticated
> > > > > user to perform code execution by sending the maliciously crafted
> > input
> > > > to
> > > > > the readValue method of the ObjectMapper." "fixed" "[2.6.7.1]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2017-17485"
> "Critical"
> > > > > "FasterXML jackson-databind through 2.8.10 and 2.9.x through 2.9.3
> > > allows
> > > > > unauthenticated remote code execution because of an incomplete fix
> > for
> > > > the
> > > > > CVE-2017-7525 deserialization flaw. This is exploitable by sending
> > > > > maliciously crafted JSON input to the readValue method of the
> > > > ObjectMapper,
> > > > > bypassing a blacklist that is ineffective if the Spring libraries
> are
> > > > > available in the classpath." "fixed" "[2.8.11]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2022-42004" "High"
> "In
> > > > > FasterXML jackson-databind before 2.13.4, resource exhaustion can
> > occur
> > > > > because of a lack of a check in
> > BeanDeserializer._deserializeFromArray
> > > to
> > > > > prevent use of deeply nested arrays. An application is vulnerable
> > only
> > > > with
> > > > > certain customized choices for deserialization." "fixed"
> "[2.12.7.1]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2022-42003" "High"
> "In
> > > > > FasterXML jackson-databind before versions 2.13.4.1 and 2.12.17.1,
> > > > resource
> > > > > exhaustion can occur because of a lack of a check in primitive
> value
> > > > > deserializers to avoid deep wrapper array nesting, when the
> > > > > UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled." "fixed"
> "[2.12.7.1]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2021-20190" "High"
> "A
> > > flaw
> > > > > was found in jackson-databind before 2.9.10.7. FasterXML mishandles
> > the
> > > > > interaction between serialization gadgets and typing. The highest
> > > threat
> > > > > from this vulnerability is to data confidentiality and integrity as
> > > well
> > > > as
> > > > > system availability." "fixed" "[2.6.7.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36518" "High"
> > > > > "jackson-databind before 2.13.0 allows a Java StackOverflow
> exception
> > > and
> > > > > denial of service via a large depth of nested objects." "fixed"
> > > > > "[2.12.6.1]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36189" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > com.newrelic.agent.deps.ch
> > > > .qos.logback.core.db.DriverManagerConnectionSource."
> > > > > "fixed" "[2.6.7.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36188" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > com.newrelic.agent.deps.ch
> > .qos.logback.core.db.JNDIConnectionSource."
> > > > > "fixed" "[2.6.7.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36187" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > org.apache.tomcat.dbcp.dbcp.datasources.SharedPoolDataSource."
> > "fixed"
> > > > > "[2.9.10.8]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36186" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > org.apache.tomcat.dbcp.dbcp.datasources.PerUserPoolDataSource."
> > "fixed"
> > > > > "[2.9.10.8]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36185" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > org.apache.tomcat.dbcp.dbcp2.datasources.SharedPoolDataSource."
> > "fixed"
> > > > > "[2.9.10.8]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36184" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > org.apache.tomcat.dbcp.dbcp2.datasources.PerUserPoolDataSource."
> > > "fixed"
> > > > > "[2.9.10.8]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36183" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > org.docx4j.org.apache.xalan.lib.sql.JNDIConnectionPool." "fixed"
> > > > > "[2.6.7.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36182" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > org.apache.tomcat.dbcp.dbcp2.cpdsadapter.DriverAdapterCPDS."
> "fixed"
> > > > > "[2.6.7.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36181" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS." "fixed"
> > > > > "[2.6.7.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36180" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > org.apache.commons.dbcp2.cpdsadapter.DriverAdapterCPDS." "fixed"
> > > > > "[2.6.7.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-36179" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > oadd.org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS."
> "fixed"
> > > > > "[2.6.7.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-35728" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > com.oracle.wls.shaded.org.apache.xalan.lib.sql.JNDIConnectionPool
> > (aka
> > > > > embedded Xalan in org.glassfish.web/javax.servlet.jsp.jstl)."
> "fixed"
> > > > > "[2.9.10.8]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-35491" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > org.apache.commons.dbcp2.datasources.SharedPoolDataSource." "fixed"
> > > > > "[2.9.10.8]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-35490" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > org.apache.commons.dbcp2.datasources.PerUserPoolDataSource."
> "fixed"
> > > > > "[2.9.10.8]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-24750" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.6 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > com.pastdev.httpcomponents.configuration.JndiConfiguration."
> "fixed"
> > > > > "[2.6.7.5]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-24616" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.6 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > br.com.anteros.dbcp.AnterosDBCPDataSource (aka Anteros-DBCP)."
> > "fixed"
> > > > > "[2.9.10.6]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-10673" "High"
> > > > > "FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the
> > > > interaction
> > > > > between serialization gadgets and typing, related to
> > > > > com.caucho.config.types.ResourceRef (aka caucho-quercus)." "fixed"
> > > > > "[2.6.7.4]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2020-10650" "High"
> "A
> > > > > deserialization flaw was discovered in jackson-databind through
> > > 2.9.10.4.
> > > > > It could allow an unauthenticated user to perform code execution
> via
> > > > > ignite-jta or quartz-core:
> > > > > org.apache.ignite.cache.jta.jndi.CacheJndiTmLookup,
> > > > > org.apache.ignite.cache.jta.jndi.CacheJndiTmFactory, and
> > > > > org.quartz.utils.JNDIConnectionProvider." "fixed" "[2.9.10.4]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-14439" "High"
> "A
> > > > > Polymorphic Typing issue was discovered in FasterXML
> jackson-databind
> > > 2.x
> > > > > before 2.9.9.2. This occurs when Default Typing is enabled (either
> > > > globally
> > > > > or for a specific property) for an externally exposed JSON endpoint
> > and
> > > > the
> > > > > service has the logback jar in the classpath." "fixed" "[2.6.7.3]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-12086" "High"
> "A
> > > > > Polymorphic Typing issue was discovered in FasterXML
> jackson-databind
> > > 2.x
> > > > > before 2.9.9. When Default Typing is enabled (either globally or
> for
> > a
> > > > > specific property) for an externally exposed JSON endpoint, the
> > service
> > > > has
> > > > > the mysql-connector-java jar (8.0.14 or earlier) in the classpath,
> > and
> > > an
> > > > > attacker can host a crafted MySQL server reachable by the victim,
> an
> > > > > attacker can send a crafted JSON message that allows them to read
> > > > arbitrary
> > > > > local files on the server. This occurs because of missing
> > > > > com.mysql.cj.jdbc.admin.MiniAdmin validation." "fixed" "[2.9.9]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2018-5968" "High"
> > > > > "FasterXML jackson-databind through 2.8.11 and 2.9.x through 2.9.3
> > > allows
> > > > > unauthenticated remote code execution because of an incomplete fix
> > for
> > > > the
> > > > > CVE-2017-7525 and CVE-2017-17485 deserialization flaws. This is exploitable via two different gadgets that bypass a blacklist." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2018-12022" "High" "An issue was discovered in FasterXML jackson-databind prior to 2.7.9.4, 2.8.11.2, and 2.9.6. When Default Typing is enabled (either globally or for a specific property), the service has the Jodd-db jar (for database access for the Jodd framework) in the classpath, and an attacker can provide an LDAP service to access, it is possible to make the service execute a malicious payload." "fixed" "[2.7.9.4]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-12814" "Medium" "A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server." "fixed" "[2.9.9.1]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jackson-databind" "2.4.0" "java-archive" "CVE-2019-12384" "Medium" "FasterXML jackson-databind 2.x before 2.9.9.1 might allow attackers to have a variety of impacts by leveraging failure to block the logback-core class from polymorphic deserialization. Depending on the classpath content, remote code execution may be possible." "fixed" "[2.9.9.1]"
> > > > >
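[Editor's note] The three jackson-databind rows above all hinge on Default Typing, i.e. deserializing attacker-chosen class names. Jackson's own remedy (2.10+) is `activateDefaultTyping` with a `BasicPolymorphicTypeValidator` allow-list; since jackson-databind is not on the classpath here, the same allow-list idea is sketched below with the JDK's own deserialization filter (JEP 290) — an analogous mechanism, not Jackson's API:

```java
import java.io.*;
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

public class AllowListDemo {
    // Serialize any object to bytes.
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    // Deserialize with an allow-list filter: only java.util.* and java.lang.*
    // may be instantiated; every other class (i.e. any gadget) is rejected
    // before its readObject logic can run. Blacklists, as the CVEs show,
    // are bypassable; allow-lists are not.
    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            ois.setObjectInputFilter(
                ObjectInputFilter.Config.createFilter("java.util.*;java.lang.*;!*"));
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Integer> ok = new ArrayList<>(List.of(1, 2, 3));
        System.out.println(deserialize(serialize(ok)));    // allowed classes round-trip

        try {
            deserialize(serialize(new BigDecimal("1.5"))); // java.math.* is not on the list
        } catch (InvalidClassException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```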
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jettison" "1.1" "java-archive" "CVE-2023-1436" "High" "An infinite recursion is triggered in Jettison when constructing a JSONArray from a Collection that contains a self-reference in one of its elements. This leads to a StackOverflowError exception being thrown. " "fixed" "[1.5.4]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jettison" "1.1" "java-archive" "CVE-2022-45693" "High" "Jettison before v1.5.2 was discovered to contain a stack overflow via the map parameter. This vulnerability allows attackers to cause a Denial of Service (DoS) via a crafted string." "fixed" "[1.5.2]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jettison" "1.1" "java-archive" "CVE-2022-45685" "High" "A stack overflow in Jettison before v1.5.2 allows attackers to cause a Denial of Service (DoS) via crafted JSON data." "fixed" "[1.5.2]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jettison" "1.1" "java-archive" "CVE-2022-40150" "High" "Those using Jettison to parse untrusted XML or JSON data may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by Out of memory. This effect may support a denial of service attack." "fixed" "[1.5.2]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jettison" "1.1" "java-archive" "CVE-2022-40149" "High" "Those using Jettison to parse untrusted XML or JSON data may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stackoverflow. This effect may support a denial of service attack." "fixed" "[1.5.1]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jetty-http" "9.4.20.v20190813" "java-archive" "CVE-2023-40167" "Medium" "Jetty is a Java based web server and servlet engine. Prior to versions 9.4.52, 10.0.16, 11.0.16, and 12.0.1, Jetty accepts the `+` character proceeding the content-length value in a HTTP/1 header field. This is more permissive than allowed by the RFC and other servers routinely reject such requests with 400 responses. There is no known exploit scenario, but it is conceivable that request smuggling could result if jetty is used in combination with a server that does not close the connection after sending such a 400 response. Versions 9.4.52, 10.0.16, 11.0.16, and 12.0.1 contain a patch for this issue. There is no workaround as there is no known exploit scenario." "fixed" "[9.4.52]"
> > > > >
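[Editor's note] The `+` leniency described in the row above is easy to reproduce with the JDK alone (a sketch, not Jetty's actual parser): `Integer.parseInt` accepts a sign character, whereas the RFC 9110 grammar for `Content-Length` is `1*DIGIT` — ASCII digits only:

```java
public class ContentLengthDemo {
    // RFC 9110: Content-Length = 1*DIGIT -- ASCII digits only, no sign, no spaces.
    static boolean isRfcContentLength(String v) {
        return !v.isEmpty() && v.chars().allMatch(c -> c >= '0' && c <= '9');
    }

    public static void main(String[] args) {
        // A lenient numeric parse quietly accepts the '+' prefix...
        System.out.println(Integer.parseInt("+100"));    // 100
        // ...while the strict grammar rejects it, as other servers do with a 400.
        System.out.println(isRfcContentLength("+100"));  // false
        System.out.println(isRfcContentLength("100"));   // true
    }
}
```

The smuggling concern arises exactly from this kind of disagreement: two hops parsing the same header by different rules.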
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jetty-http" "9.4.20.v20190813" "java-archive" "CVE-2022-2047" "Low" "In Eclipse Jetty versions 9.4.0 thru 9.4.46, and 10.0.0 thru 10.0.9, and 11.0.0 thru 11.0.9 versions, the parsing of the authority segment of an http scheme URI, the Jetty HttpURI class improperly detects an invalid input as a hostname. This can lead to failures in a Proxy scenario." "fixed" "[9.4.47]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jetty-server" "9.4.20.v20190813" "java-archive" "CVE-2021-28165" "High" "In Eclipse Jetty 7.2.2 to 9.4.38, 10.0.0.alpha0 to 10.0.1, and 11.0.0.alpha0 to 11.0.1, CPU usage can reach 100% upon receiving a large invalid TLS frame." "fixed" "[9.4.39]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jetty-server" "9.4.20.v20190813" "java-archive" "CVE-2023-26049" "Medium" "Jetty is a java based web server and servlet engine. Nonstandard cookie parsing in Jetty may allow an attacker to smuggle cookies within other cookies, or otherwise perform unintended behavior by tampering with the cookie parsing mechanism. If Jetty sees a cookie VALUE that starts with `"` (double quote), it will continue to read the cookie string until it sees a closing quote -- even if a semicolon is encountered. So, a cookie header such as: `DISPLAY_LANGUAGE="b; JSESSIONID=1337; c=d"` will be parsed as one cookie, with the name DISPLAY_LANGUAGE and a value of b; JSESSIONID=1337; c=d instead of 3 separate cookies. This has security implications because if, say, JSESSIONID is an HttpOnly cookie, and the DISPLAY_LANGUAGE cookie value is rendered on the page, an attacker can smuggle the JSESSIONID cookie into the DISPLAY_LANGUAGE cookie and thereby exfiltrate it. This is significant when an intermediary is enacting some policy based on cookies, so a smuggled cookie can bypass that policy yet still be seen by the Jetty server or its logging system. This issue has been addressed in versions 9.4.51, 10.0.14, 11.0.14, and 12.0.0.beta0 and users are advised to upgrade. There are no known workarounds for this issue." "fixed" "[9.4.51.v20230217]"
> > > > >
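[Editor's note] The cookie-smuggling row above is easiest to see side by side. The sketch below (hypothetical helper names, not Jetty's code) contrasts a quote-respecting splitter — roughly the vulnerable behavior — with a plain semicolon split:

```java
import java.util.ArrayList;
import java.util.List;

public class CookieSmugglingDemo {
    // Roughly what the vulnerable behavior amounts to: once a value opens
    // with '"', keep consuming past semicolons until the closing quote.
    static List<String> quoteAwareSplit(String header) {
        List<String> cookies = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (char c : header.toCharArray()) {
            if (c == '"') inQuotes = !inQuotes;
            if (c == ';' && !inQuotes) {
                cookies.add(cur.toString().trim());
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        cookies.add(cur.toString().trim());
        return cookies;
    }

    public static void main(String[] args) {
        String header = "DISPLAY_LANGUAGE=\"b; JSESSIONID=1337; c=d\"";
        // The quote-aware parse yields ONE cookie that swallows JSESSIONID:
        System.out.println(quoteAwareSplit(header));
        // A plain semicolon split sees three cookies:
        System.out.println(List.of(header.split(";")));
    }
}
```

The exfiltration risk follows directly: the HttpOnly `JSESSIONID` value ends up inside the value of a cookie the page is allowed to render.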
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jetty-server" "9.4.20.v20190813" "java-archive" "CVE-2023-26048" "Medium" "Jetty is a java based web server and servlet engine. In affected versions servlets with multipart support (e.g. annotated with `@MultipartConfig`) that call `HttpServletRequest.getParameter()` or `HttpServletRequest.getParts()` may cause `OutOfMemoryError` when the client sends a multipart request with a part that has a name but no filename and very large content. This happens even with the default settings of `fileSizeThreshold=0` which should stream the whole part content to disk. An attacker client may send a large multipart request and cause the server to throw `OutOfMemoryError`. However, the server may be able to recover after the `OutOfMemoryError` and continue its service -- although it may take some time. This issue has been patched in versions 9.4.51, 10.0.14, and 11.0.14. Users are advised to upgrade. Users unable to upgrade may set the multipart parameter `maxRequestSize` which must be set to a non-negative value, so the whole multipart content is limited (although still read into memory)." "fixed" "[9.4.51.v20230217]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jetty-server" "9.4.20.v20190813" "java-archive" "CVE-2020-27223" "Medium" "In Eclipse Jetty 9.4.6.v20170531 to 9.4.36.v20210114 (inclusive), 10.0.0, and 11.0.0 when Jetty handles a request containing multiple Accept headers with a large number of “quality” (i.e. q) parameters, the server may enter a denial of service (DoS) state due to high CPU usage processing those quality values, resulting in minutes of CPU time exhausted processing those quality values." "fixed" "[9.4.37]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jetty-server" "9.4.20.v20190813" "java-archive" "CVE-2020-27218" "Medium" "In Eclipse Jetty version 9.4.0.RC0 to 9.4.34.v20201102, 10.0.0.alpha0 to 10.0.0.beta2, and 11.0.0.alpha0 to 11.0.0.beta2, if GZIP request body inflation is enabled and requests from different clients are multiplexed onto a single connection, and if an attacker can send a request with a body that is received entirely but not consumed by the application, then a subsequent request on the same connection will see that body prepended to its body. The attacker will not see any data but may inject data into the body of the subsequent request." "fixed" "[9.4.35.v20201120]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jetty-server" "9.4.20.v20190813" "java-archive" "CVE-2021-34428" "Low" "For Eclipse Jetty versions <= 9.4.40, <= 10.0.2, <= 11.0.2, if an exception is thrown from the SessionListener#sessionDestroyed() method, then the session ID is not invalidated in the session ID manager. On deployments with clustered sessions and multiple contexts this can result in a session not being invalidated. This can result in an application used on a shared computer being left logged in." "fixed" "[9.4.41]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jetty-webapp" "9.4.20.v20190813" "java-archive" "CVE-2020-27216" "High" "In Eclipse Jetty versions 1.0 thru 9.4.32.v20200930, 10.0.0.alpha1 thru 10.0.0.beta2, and 11.0.0.alpha1 thru 11.0.0.beta2O, on Unix like systems, the system's temporary directory is shared between all users on that system. A collocated user can observe the process of creating a temporary sub directory in the shared temporary directory and race to complete the creation of the temporary subdirectory. If the attacker wins the race then they will have read and write permission to the subdirectory used to unpack web applications, including their WEB-INF/lib jar files and JSP files. If any code is ever executed out of this temporary directory, this can lead to a local privilege escalation vulnerability." "fixed" "[9.4.33.v20201020]"
> > > > >
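[Editor's note] CVE-2020-27216 above is the classic shared-`/tmp` race. A sketch of the general mitigation (not Jetty's actual patch): `java.nio.file.Files.createTempDirectory` creates the directory atomically and, on POSIX file systems, with owner-only permissions, so a collocated user can neither pre-create nor read it:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

public class TempDirDemo {
    public static void main(String[] args) throws IOException {
        // Atomic creation; on POSIX the default mode is rwx------ (0700).
        Path dir = Files.createTempDirectory("webapp-unpack-");
        try {
            Set<PosixFilePermission> perms = Files.getPosixFilePermissions(dir);
            System.out.println(perms);
            // Verify no group/other access bits are set:
            System.out.println(perms.stream().noneMatch(
                p -> p.name().startsWith("GROUP") || p.name().startsWith("OTHERS")));
        } catch (UnsupportedOperationException e) {
            // Non-POSIX file system (e.g. Windows): POSIX permission bits do not apply.
            System.out.println("non-POSIX file system");
        } finally {
            Files.deleteIfExists(dir);
        }
    }
}
```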
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "jetty-xml" "9.4.20.v20190813" "java-archive" "GHSA-58qw-p7qm-5rvh" "Low" "Eclipse Jetty XmlParser allows arbitrary DOCTYPE declarations" "fixed" "[9.4.52]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "log4j" "1.2.17" "java-archive" "CVE-2022-23305" "Critical" "By design, the JDBCAppender in Log4j 1.2.x accepts an SQL statement as a configuration parameter where the values to be inserted are converters from PatternLayout. The message converter, %m, is likely to always be included. This allows attackers to manipulate the SQL by entering crafted strings into input fields or headers of an application that are logged allowing unintended SQL queries to be executed. Note this issue only affects Log4j 1.x when specifically configured to use the JDBCAppender, which is not the default. Beginning in version 2.0-beta8, the JDBCAppender was re-introduced with proper support for parameterized SQL queries and further customization over the columns written to in logs. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "log4j" "1.2.17" "java-archive" "CVE-2019-17571" "Critical" "Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "log4j" "1.2.17" "java-archive" "CVE-2022-23307" "High" "CVE-2020-9493 identified a deserialization issue that was present in Apache Chainsaw. Prior to Chainsaw V2.0 Chainsaw was a component of Apache Log4j 1.2.x where the same issue exists." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "log4j" "1.2.17" "java-archive" "CVE-2022-23302" "High" "JMSSink in all versions of Log4j 1.x is vulnerable to deserialization of untrusted data when the attacker has write access to the Log4j configuration or if the configuration references an LDAP service the attacker has access to. The attacker can provide a TopicConnectionFactoryBindingName configuration causing JMSSink to perform JNDI requests that result in remote code execution in a similar fashion to CVE-2021-4104. Note this issue only affects Log4j 1.x when specifically configured to use JMSSink, which is not the default. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "log4j" "1.2.17" "java-archive" "CVE-2021-4104" "High" "JMSAppender in Log4j 1.2 is vulnerable to deserialization of untrusted data when the attacker has write access to the Log4j configuration. The attacker can provide TopicBindingName and TopicConnectionFactoryBindingName configurations causing JMSAppender to perform JNDI requests that result in remote code execution in a similar fashion to CVE-2021-44228. Note this issue only affects Log4j 1.2 when specifically configured to use JMSAppender, which is not the default. Apache Log4j 1.2 reached end of life in August 2015. Users should upgrade to Log4j 2 as it addresses numerous other issues from the previous versions." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "netty" "3.10.6.Final" "java-archive" "CVE-2019-20445" "Critical" "HttpObjectDecoder.java in Netty before 4.1.44 allows a Content-Length header to be accompanied by a second Content-Length header, or by a Transfer-Encoding header." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "netty" "3.10.6.Final" "java-archive" "CVE-2019-20444" "Critical" "HttpObjectDecoder.java in Netty before 4.1.44 allows an HTTP header that lacks a colon, which might be interpreted as a separate header with an incorrect syntax, or might be interpreted as an "invalid fold."" "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "netty" "3.10.6.Final" "java-archive" "CVE-2021-37137" "High" "The Snappy frame decoder function doesn't restrict the chunk length which may lead to excessive memory usage. Beside this it also may buffer reserved skippable chunks until the whole chunk was received which may lead to excessive memory usage as well. This vulnerability can be triggered by supplying malicious input that decompresses to a very big size (via a network stream or a file) or by sending a huge skippable chunk." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "netty" "3.10.6.Final" "java-archive" "CVE-2021-37136" "High" "The Bzip2 decompression decoder function doesn't allow setting size restrictions on the decompressed output data (which affects the allocation size used during decompression). All users of Bzip2Decoder are affected. The malicious input can trigger an OOME and so a DoS attack" "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "netty" "3.10.6.Final" "java-archive" "CVE-2021-43797" "Medium" "Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. Netty prior to version 4.1.71.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to "sanitize" header names before it forward these to another remote system when used as proxy. This remote system can't see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.71.Final." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "netty" "3.10.6.Final" "java-archive" "CVE-2021-21409" "Medium" "Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.61.Final there is a vulnerability that enables request smuggling. The content-length header is not correctly validated if the request only uses a single Http2HeaderFrame with the endStream set to to true. This could lead to request smuggling if the request is proxied to a remote peer and translated to HTTP/1.1. This is a followup of GHSA-wm47-8v5p-wjpj/CVE-2021-21295 which did miss to fix this one case. This was fixed as part of 4.1.61.Final." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "netty" "3.10.6.Final" "java-archive" "CVE-2021-21295" "Medium" "Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty (io.netty:netty-codec-http2) before version 4.1.60.Final there is a vulnerability that enables request smuggling. If a Content-Length header is present in the original HTTP/2 request, the field is not validated by `Http2MultiplexHandler` as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (`HttpRequest`, `HttpContent`, etc.) via `Http2StreamFrameToHttpObjectCodec` and then sent up to the child channel's pipeline and proxied through a remote peer as HTTP/1.1 this may result in request smuggling. In a proxy case, users may assume the content-length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is a HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. For an example attack refer to the linked GitHub Advisory. Users are only affected if all of this is true: `HTTP2MultiplexCodec` or `Http2FrameCodec` is used, `Http2StreamFrameToHttpObjectCodec` is used to convert to HTTP/1.1 objects, and these HTTP/1.1 objects are forwarded to another remote peer. This has been patched in 4.1.60.Final As a workaround, the user can do the validation by themselves by implementing a custom `ChannelInboundHandler` that is put in the `ChannelPipeline` behind `Http2StreamFrameToHttpObjectCodec`." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "netty" "3.10.6.Final" "java-archive" "CVE-2021-21290" "Medium" "Netty is an open-source, asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. In Netty before version 4.1.59.Final there is a vulnerability on Unix-like systems involving an insecure temp file. When netty's multipart decoders are used local information disclosure can occur via the local system temporary directory if temporary storing uploads on the disk is enabled. On unix-like systems, the temporary directory is shared between all users. As such, writing to this directory using APIs that do not explicitly set the file/directory permissions can lead to information disclosure. Of note, this does not impact modern MacOS Operating Systems. The method "File.createTempFile" on unix-like systems creates a random file, but, by default will create this file with the permissions "-rw-r--r--". Thus, if sensitive information is written to this file, other local users can read this information. This is the case in netty's "AbstractDiskHttpData" is vulnerable. This has been fixed in version 4.1.59.Final. As a workaround, one may specify your own "java.io.tmpdir" when you start the JVM or use "DefaultHttpDataFactory.setBaseDir(...)" to set the directory to something that is only readable by the current user." "not-fixed" "[]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "netty-codec-http2" "4.1.87.Final" "java-archive" "GHSA-xpw8-rcwv-8f8p" "High" "io.netty:netty-codec-http2 vulnerable to HTTP/2 Rapid Reset Attack" "fixed" "[4.1.100.Final]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "netty-handler" "4.1.87.Final" "java-archive" "CVE-2023-34462" "Medium" "Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. The `SniHandler` can allocate up to 16MB of heap for each channel during the TLS handshake. When the handler or the channel does not have an idle timeout, it can be used to make a TCP server using the `SniHandler` to allocate 16MB of heap. The `SniHandler` class is a handler that waits for the TLS handshake to configure a `SslHandler` according to the indicated server name by the `ClientHello` record. For this matter it allocates a `ByteBuf` using the value defined in the `ClientHello` record. Normally the value of the packet should be smaller than the handshake packet but there are not checks done here and the way the code is written, it is possible to craft a packet that makes the `SslClientHelloHandler`. This vulnerability has been fixed in version 4.1.94.Final." "fixed" "[4.1.94.Final]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "okio" "1.6.0" "java-archive" "CVE-2023-3635" "High" "GzipSource does not handle an exception that might be raised when parsing a malformed gzip buffer. This may lead to denial of service of the Okio client when handling a crafted GZIP archive, by using the GzipSource class. " "fixed" "[1.17.6]"
> > > > >
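[Editor's note] For the okio row: the defensive pattern its fix amounts to — surfacing malformed-gzip input as a catchable error rather than an unhandled crash — can be illustrated with the JDK's own gzip decoder (a sketch, not Okio's code). `java.util.zip.GZIPInputStream` rejects a corrupt header with a `ZipException` the caller can recover from:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.ZipException;

public class MalformedGzipDemo {
    public static void main(String[] args) throws IOException {
        // Not a gzip stream: the magic bytes should be 0x1f 0x8b.
        byte[] notGzip = {0x00, 0x01, 0x02, 0x03};
        try {
            new GZIPInputStream(new ByteArrayInputStream(notGzip));
            System.out.println("unexpectedly parsed");
        } catch (ZipException e) {
            // A robust decoder turns crafted/corrupt input into a
            // recoverable error instead of taking the client down.
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```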
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "protobuf-java" "2.5.0" "java-archive" "CVE-2022-3510" "High" "A parsing issue similar to CVE-2022-3171, but with Message-Type Extensions in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above. " "fixed" "[3.16.3]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "protobuf-java" "2.5.0" "java-archive" "CVE-2022-3509" "High" "A parsing issue similar to CVE-2022-3171, but with textformat in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above." "fixed" "[3.16.3]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "protobuf-java" "2.5.0" "java-archive" "CVE-2022-3171" "High" "A parsing issue with binary data in protobuf-java core and lite versions prior to 3.21.7, 3.20.3, 3.19.6 and 3.16.3 can lead to a denial of service attack. Inputs containing multiple instances of non-repeated embedded messages with repeated or unknown fields causes objects to be converted back-n-forth between mutable and immutable forms, resulting in potentially long garbage collection pauses. We recommend updating to the versions mentioned above." "fixed" "[3.16.3]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "protobuf-java" "2.5.0" "java-archive" "CVE-2021-22570" "Medium" "Nullptr dereference when a null char is present in a proto symbol. The symbol is parsed incorrectly, leading to an unchecked call into the proto file's name during generation of the resulting error message. Since the symbol is incorrectly parsed, the file is nullptr. We recommend upgrading to version 3.15.0 or greater." "fixed" "[3.15.0]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "protobuf-java" "2.5.0" "java-archive" "CVE-2021-22569" "Medium" "An issue in protobuf-java allowed the interleaving of com.google.protobuf.UnknownFieldSet fields in such a way that would be processed out of order. A small malicious payload can occupy the parser for several minutes by creating large numbers of short-lived objects that cause frequent, repeated pauses. We recommend upgrading libraries beyond the vulnerable versions." "fixed" "[3.16.1]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "snappy-java" "1.0.5" "java-archive" "CVE-2023-43642" "High" "snappy-java is a Java port of the snappy, a fast C++ compresser/decompresser developed by Google. The SnappyInputStream was found to be vulnerable to Denial of Service (DoS) attacks when decompressing data with a too large chunk size. Due to missing upper bound check on chunk length, an unrecoverable fatal error can occur. All versions of snappy-java including the latest released version 1.1.10.3 are vulnerable to this issue. A fix has been introduced in commit `9f8c3cf74` which will be included in the 1.1.10.4 release. Users are advised to upgrade. Users unable to upgrade should only accept compressed data from trusted sources." "fixed" "[1.1.10.4]"
> > > > >
> > > > > /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > > "snappy-java" "1.0.5" "java-archive" "CVE-2023-34455" "High" "snappy-java is a fast compressor/decompressor for Java. Due to use of an unchecked chunk length, an unrecoverable fatal error can occur in versions prior to 1.1.10.1. The code in the function hasNextChunk in the fileSnappyInputStream.java checks if a given stream has more chunks to read. It does that by attempting to read 4 bytes. If it wasn't possible to read the 4 bytes, the function returns false. Otherwise, if 4 bytes were available, the code treats them as the length of the next chunk. In the case that the `compressed` variable is null, a byte array is allocated with the size given by the input data. Since the code doesn't test the legality of the `chunkSize` variable, it is possible to pass a negative number (such as 0xFFFFFFFF which is -1), which will cause the code to raise a `java.lang.NegativeArraySizeException` exception. A worse case would happen when passing a huge positive value (such as 0x7FFFFFFF), which would raise the fatal `java.lang.OutOfMemoryError` error. Version 1.1.10.1 contains a patch for this issue." "fixed" "[1.1.10.1]"
> > > > >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
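[Editorial note: the negative-chunk-length failure described in the entry above can be sketched in a few lines. This is an illustrative standalone snippet, not snappy-java's actual code; the helper name `chunkLength` and the big-endian byte layout are assumptions for demonstration only.]

```java
public class ChunkLengthDemo {
    // Interpret 4 bytes (big-endian) as a chunk length, mirroring the
    // unchecked read the CVE description refers to. Illustrative only.
    static int chunkLength(byte[] b) {
        return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
             | ((b[2] & 0xFF) << 8) | (b[3] & 0xFF);
    }

    public static void main(String[] args) {
        // A hostile stream supplies 0xFFFFFFFF as the "length" field.
        byte[] hostile = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF};
        int len = chunkLength(hostile);
        System.out.println(len); // -1
        try {
            byte[] buf = new byte[len]; // unchecked allocation, negative size
        } catch (NegativeArraySizeException e) {
            System.out.println("caught " + e.getClass().getSimpleName());
        }
    }
}
```

With 0x7FFFFFFF instead, the allocation is legal but attempts a ~2 GiB buffer, which is the `OutOfMemoryError` path the description mentions.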
> > > > > "snappy-java" "1.0.5" "java-archive" "CVE-2023-34454" "High"
> > > "snappy-java
> > > > > is a fast compressor/decompressor for Java. Due to unchecked
> > > > > multiplications, an integer overflow may occur in versions prior to
> > > > > 1.1.10.1, causing an unrecoverable fatal error. The function
> > > > > `compress(char[] input)` in the file `Snappy.java` receives an
> array
> > of
> > > > > characters and compresses it. It does so by multiplying the length
> > by 2
> > > > and
> > > > > passing it to the rawCompress` function. Since the length is not
> > > tested,
> > > > > the multiplication by two can cause an integer overflow and become
> > > > > negative. The rawCompress function then uses the received length
> and
> > > > passes
> > > > > it to the natively compiled maxCompressedLength function, using the
> > > > > returned value to allocate a byte array. Since the
> > maxCompressedLength
> > > > > function treats the length as an unsigned integer, it doesn’t
> care
> > > that
> > > > > it is negative, and it returns a valid value, which is casted to a
> > > signed
> > > > > integer by the Java engine. If the result is negative, a
> > > > > `java.lang.NegativeArraySizeException` exception will be raised
> while
> > > > > trying to allocate the array `buf`. On the other side, if the
> result
> > is
> > > > > positive, the `buf` array will successfully be allocated, but its
> > size
> > > > > might be too small to use for the compression, causing a fatal
> Access
> > > > > Violation error. The same issue exists also when using the
> `compress`
> > > > > functions that receive double, float, int, long and short, each
> > using a
> > > > > different multiplier that may cause the same issue. The issue most
> > > likely
> > > > > won’t occur when using a byte array, since creating a byte array
> of
> > > > size
> > > > > 0x80000000 (or any other negative value) is impossible in the first
> > > > place.
> > > > > Version 1.1.10.1 contains a patch for this issue." "fixed"
> > "[1.1.10.1]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
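[Editorial note: the `length * 2` overflow described in the entry above is plain Java `int` wraparound. A minimal sketch, assuming a hypothetical helper rather than snappy-java's real source:]

```java
public class CompressOverflowDemo {
    // Per the CVE description, compress(char[]) computed the byte length as
    // input.length * 2 with no overflow check. Illustrative helper only.
    static int byteLengthForChars(int charCount) {
        return charCount * 2;
    }

    public static void main(String[] args) {
        int hugeCharCount = 0x40000000;      // 2^30 chars (hypothetical input)
        int byteLen = byteLengthForChars(hugeCharCount);
        System.out.println(byteLen);         // -2147483648: wrapped negative
        // Allocating new byte[byteLen] here would throw
        // java.lang.NegativeArraySizeException, as the description explains.
    }
}
```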
> > > > > "snappy-java" "1.0.5" "java-archive" "CVE-2023-34453" "High"
> > > "snappy-java
> > > > > is a fast compressor/decompressor for Java. Due to unchecked
> > > > > multiplications, an integer overflow may occur in versions prior to
> > > > > 1.1.10.1, causing a fatal error. The function `shuffle(int[]
> input)`
> > in
> > > > the
> > > > > file `BitShuffle.java` receives an array of integers and applies a
> > bit
> > > > > shuffle on it. It does so by multiplying the length by 4 and
> passing
> > it
> > > > to
> > > > > the natively compiled shuffle function. Since the length is not
> > tested,
> > > > the
> > > > > multiplication by four can cause an integer overflow and become a
> > > smaller
> > > > > value than the true size, or even zero or negative. In the case of
> a
> > > > > negative value, a `java.lang.NegativeArraySizeException` exception
> > will
> > > > > raise, which can crash the program. In a case of a value that is
> zero
> > > or
> > > > > too small, the code that afterwards references the shuffled array
> > will
> > > > > assume a bigger size of the array, which might cause exceptions
> such
> > as
> > > > > `java.lang.ArrayIndexOutOfBoundsException`. The same issue exists
> > also
> > > > when
> > > > > using the `shuffle` functions that receive a double, float, long
> and
> > > > short,
> > > > > each using a different multiplier that may cause the same issue.
> > > Version
> > > > > 1.1.10.1 contains a patch for this vulnerability." "fixed"
> > "[1.1.10.1]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
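[Editorial note: unlike the `compress` case above, the entry for `shuffle` also covers overflow to a *small positive* value, which leads to out-of-bounds access rather than a failed allocation. A sketch under the same assumptions (hypothetical helper, not snappy-java's real code):]

```java
public class ShuffleOverflowDemo {
    // Per the CVE description, shuffle(int[]) multiplied the element count
    // by 4 with no overflow check. Illustrative helper only.
    static int byteLengthForInts(int intCount) {
        return intCount * 4;
    }

    public static void main(String[] args) {
        int count = 0x40000001;              // 2^30 + 1 ints (hypothetical)
        int byteLen = byteLengthForInts(count);
        System.out.println(byteLen);         // 4: wrapped to a tiny value
        // Downstream code that trusts the true size then indexes past the
        // 4-byte buffer, hitting ArrayIndexOutOfBoundsException.
    }
}
```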
> > > > > "woodstox-core" "5.0.3" "java-archive" "CVE-2022-40152" "High"
> "Those
> > > > > using Woodstox to parse XML data may be vulnerable to Denial of
> > Service
> > > > > attacks (DOS) if DTD support is enabled. If the parser is running
> on
> > > user
> > > > > supplied input, an attacker may supply content that causes the
> parser
> > > to
> > > > > crash by stackoverflow. This effect may support a denial of service
> > > > > attack." "fixed" "[5.4.0]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
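[Editorial note: independently of upgrading woodstox-core, the entry above only applies when DTD support is enabled, so consumers that parse untrusted XML can cut off this attack surface at the StAX level. A minimal sketch using the standard javax.xml.stream API; whether a given Phoenix/HBase code path lets you control the parser configuration is an open question.]

```java
import javax.xml.stream.XMLInputFactory;

public class HardenedStaxFactory {
    static XMLInputFactory newHardenedFactory() {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        // Refuse DTDs entirely: CVE-2022-40152 requires DTD support.
        factory.setProperty(XMLInputFactory.SUPPORT_DTD, Boolean.FALSE);
        // Defense in depth: also refuse external entity resolution.
        factory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES,
                Boolean.FALSE);
        return factory;
    }

    public static void main(String[] args) {
        XMLInputFactory f = newHardenedFactory();
        System.out.println(f.getProperty(XMLInputFactory.SUPPORT_DTD));
    }
}
```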
> > > > > "zookeeper" "3.5.7" "java-archive" "CVE-2023-44981" "Critical"
> > > > > "Authorization Bypass Through User-Controlled Key vulnerability in
> > > Apache
> > > > > ZooKeeper. If SASL Quorum Peer authentication is enabled in
> ZooKeeper
> > > > > (quorum.auth.enableSasl=true), the authorization is done by
> verifying
> > > > that
> > > > > the instance part in SASL authentication ID is listed in zoo.cfg
> > server
> > > > > list. The instance part in SASL auth ID is optional and if it's
> > > missing,
> > > > > like 'e...@example.com', the authorization check will be skipped.Â
> > As a
> > > > > result an arbitrary endpoint could join the cluster and begin
> > > propagating
> > > > > counterfeit changes to the leader, essentially giving it complete
> > > > > read-write access to the data tree. Quorum Peer authentication is
> > not
> > > > > enabled by default. Users are recommended to upgrade to version
> > 3.9.1,
> > > > > 3.8.3, 3.7.2, which fixes the issue. Alternately ensure the
> ensemble
> > > > > election/quorum communication is protected by a firewall as this
> will
> > > > > mitigate the issue. See the documentation for more details on
> correct
> > > > > cluster administration. " "fixed" "[3.7.2]"
> > > > >
> > > >
> > >
> >
> /usr/lib/trino/plugin/phoenix5/phoenix-client-hbase-2.4-5.1.4-SNAPSHOT.jar
> > > > >
> > > > >
> > > > > On Mon, Feb 19, 2024 at 9:52 AM Istvan Toth
> > > > > <st...@cloudera.com.invalid> wrote:
> > > > >
> > > > >> Hi,
> > > > >>
> > > > >> I can't see an attachment on this email.
> > > > >>
> > > > >> Istvan
> > > > >>
> > > > >> On Sun, Feb 18, 2024 at 6:02 PM Mateusz Gajewski <
> > > > >> mateusz.gajew...@starburstdata.com> wrote:
> > > > >>
> > > > >> > Hi Phoenix team,
> > > > >> >
> > > > >> > I've built and tested the upcoming 5.1.4 version by building it
> > > > >> > from the 5.1 branch (5.1.3-124-gb6ca402f9) and would like to ask
> > > > >> > you to address several CVEs before releasing 5.1.4. The Phoenix
> > > > >> > integration in Trino (https://github.com/trinodb/trino) is one
> > > > >> > of two connectors with a really high number of CVEs that we
> > > > >> > would like to remove from our codebase - either by updating the
> > > > >> > connector to a newer, CVE-free dependency or by dropping the
> > > > >> > connector code and support for Phoenix (actually Phoenix5
> > > > >> > accounts for 95% of the remaining CVEs in our codebase).
> > > > >> >
> > > > >> > I'm attaching a list of detected vulnerabilities.
> > > > >> >
> > > > >> > Please let me know how we can work around these vulnerabilities.
> > > > >> >
> > > > >>
> > > > >>
> > > > >> --
> > > > >> *István Tóth* | Sr. Staff Software Engineer
> > > > >> *Email*: st...@cloudera.com
> > > > >> cloudera.com <https://www.cloudera.com>