This is an automated email from the ASF dual-hosted git repository.

samt pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit be572f27416e9477d5ac996087951ecdcd19e25a
Merge: 259c63f 408f969
Author: Sam Tunnicliffe <[email protected]>
AuthorDate: Fri Jun 12 11:28:03 2020 +0100

    Merge branch 'cassandra-3.11' into trunk

 CHANGES.txt                                        |  1 +
 NEWS.txt                                           |  8 ++++
 .../cassandra/auth/jmx/AuthorizationProxy.java     | 46 ++++++++++-----------
 .../cassandra/cql3/functions/UDFunction.java       | 22 +++++-----
 src/java/org/apache/cassandra/db/Directories.java  | 47 +++++++++++-----------
 ...Directories.java => DisallowedDirectories.java} | 29 ++++++-------
 ...sMBean.java => DisallowedDirectoriesMBean.java} |  2 +-
 .../org/apache/cassandra/db/DiskBoundaries.java    |  2 +-
 .../apache/cassandra/db/DiskBoundaryManager.java   |  6 +--
 .../db/compaction/AbstractCompactionStrategy.java  |  6 +--
 .../cassandra/db/compaction/LeveledManifest.java   |  2 +-
 .../org/apache/cassandra/dht/RangeStreamer.java    | 12 +++---
 .../cassandra/hints/HintsDispatchExecutor.java     |  2 +-
 .../org/apache/cassandra/hints/HintsStore.java     | 10 ++---
 .../cassandra/io/sstable/format/SSTableReader.java |  2 +-
 .../cassandra/service/DefaultFSErrorHandler.java   |  6 +--
 .../apache/cassandra/service/StorageService.java   |  2 +-
 test/unit/org/apache/cassandra/Util.java           |  4 +-
 .../cassandra/auth/jmx/AuthorizationProxyTest.java |  4 +-
 .../org/apache/cassandra/db/DirectoriesTest.java   |  2 +-
 .../cassandra/db/DiskBoundaryManagerTest.java      |  6 +--
 ....java => CorruptedSSTablesCompactionsTest.java} | 17 ++++----
 22 files changed, 124 insertions(+), 114 deletions(-)

diff --cc CHANGES.txt
index 0c4203b,7f54146..5f09cdc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -55,7 -15,10 +55,8 @@@ Merged from 3.0
   * Allow selecting static column only when querying static index 
(CASSANDRA-14242)
   * cqlsh return non-zero status when STDIN CQL fails (CASSANDRA-15623)
   * Don't skip sstables in slice queries based only on local min/max/deletion 
timestamp (CASSANDRA-15690)
 - * Memtable memory allocations may deadlock (CASSANDRA-15367)
 - * Run evictFromMembership in GossipStage (CASSANDRA-15592)
  Merged from 2.2:
+  * Fix nomenclature of allow and deny lists (CASSANDRA-15862)
   * Remove generated files from source artifact (CASSANDRA-15849)
   * Remove duplicated tools binaries from tarballs (CASSANDRA-15768)
   * Duplicate results with DISTINCT queries in mixed mode (CASSANDRA-15501)
diff --cc NEWS.txt
index 8f18659,077bd8b..351f80d
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -116,152 -47,8 +116,160 @@@ New feature
  
  Upgrading
  ---------
 -    - Nothing specific to this release, but please see previous upgrading 
sections,
 -      especially if you are upgrading from 3.0.
 +    - Sstables for tables with a frozen UDT written by C* 3.0 appear as 
corrupted.
 +
 +      Background: The serialization-header in the -Statistics.db sstable 
component contains the type information
 +      of the table columns. C* 3.0 wrote incorrect type information for 
frozen UDTs by omitting the
 +      "frozen" information. Non-frozen UDTs were introduced by CASSANDRA-7423 
in C* 3.6. Since then, the missing
 +      "frozen" information leads to deserialization issues that result in 
CorruptSSTableExceptions and potentially other
 +      exceptions as well.
 +
 +      As a mitigation, the sstable serialization-headers are rewritten to 
contain the missing "frozen" information for
 +      UDTs once, when an upgrade from C* 3.0 is detected. This migration does 
not touch snapshots or backups.
 +
 +      The sstablescrub tool now performs a check of the sstable 
serialization-header against the schema. A mismatch of
 +      the types in the serialization-header and the schema will cause 
sstablescrub to error out and stop by default.
 +      See the new `-e` option. `-e off` disables the new validation code. `-e 
fix` or `-e fix-only`, e.g.
 +      `sstablescrub -e fix keyspace table`, will validate the 
serialization-header, rewrite the non-frozen UDTs
in the serialization-header to frozen UDTs, if that matches the schema, 
and continue with scrub.
 +      See `sstablescrub -h`.
 +      (CASSANDRA-15035)
 +    - CASSANDRA-13241 lowered the default chunk_length_in_kb for compressed 
tables from
 +      64kb to 16kb. For highly compressible data this can have a noticeable 
impact
 +      on space utilization. You may want to consider manually specifying this 
value.
 +    - Additional columns have been added to system_distributed.repair_history,
 +      system_traces.sessions and system_traces.events. As a result select 
queries
 +      against these tables - including queries against tracing tables 
performed
 +      automatically by the drivers and cqlsh - will fail and generate an 
error in the log
 +      during upgrade when the cluster is mixed version. On the 3.x side this will 
also lead
 +      to broken internode connections and lost messages.
 +      Cassandra versions 3.0.20 and 3.11.6 pre-add these columns (see 
CASSANDRA-15385),
 +      so please make sure to upgrade to those versions or higher before 
upgrading to
 +      4.0, so that query tracing does not cause issues during the upgrade.
 +    - Timestamp ties between values resolve differently: if either value has 
a TTL,
 +      this value always wins. This is to provide consistent reconciliation 
before
 +      and after the value expires into a tombstone.
 +    - Cassandra 4.0 removed support for COMPACT STORAGE tables. All Compact 
Tables
 +      have to be migrated using the `ALTER ... DROP COMPACT STORAGE` statement 
3.0/3.11.
 +      Cassandra 4.0 and later will not start if flags indicate that the table 
is non-CQL.
 +      Syntax for creating compact tables is also deprecated.
 +    - Support for legacy auth tables in the system_auth keyspace (users,
 +      permissions, credentials) and the migration code has been removed. 
Migration
 +      of these legacy auth tables must have been completed before the upgrade 
to
 +      4.0 and the legacy tables must have been removed. See the 'Upgrading' 
section
 +      for version 2.2 for migration instructions.
 +    - Cassandra 4.0 removed support for the deprecated Thrift interface. 
Amongst
 +      other things, this implies the removal of all yaml options related to 
thrift
 +      ('start_rpc', rpc_port, ...).
 +    - Cassandra 4.0 removed support for any pre-3.0 format. This means you
 +      cannot upgrade from a 2.x version to 4.0 directly, you have to upgrade 
to
 +      a 3.0.x/3.x version first (and run upgradesstables). In particular, this
 +      means Cassandra 4.0 cannot load or read pre-3.0 sstables in any way: you
 +      will need to upgrade those sstables in 3.0.x/3.x first.
 +    - Upgrades from 3.0.x or 3.x are supported since 3.0.13 or 3.11.0, 
previous
 +      versions will cause issues during rolling upgrades (CASSANDRA-13274).
 +    - Cassandra will no longer allow invalid keyspace replication options, 
such
 +      as invalid datacenter names for NetworkTopologyStrategy. Operators MUST
 +      add new nodes to a datacenter before they can set ALTER or CREATE
 +      keyspace replication policies using that datacenter. Existing keyspaces
 +      will continue to operate, but CREATE and ALTER will validate that all
 +      datacenters specified exist in the cluster.
 +    - Cassandra 4.0 fixes a problem with incremental repair which caused 
repaired
 +      data to be inconsistent between nodes. The fix changes the behavior of 
both
 +      full and incremental repairs. For full repairs, data is no longer marked
 +      repaired. For incremental repairs, anticompaction is run at the 
beginning
 +      of the repair, instead of at the end. If incremental repair was being 
used
 +      prior to upgrading, a full repair should be run after upgrading to 
resolve
 +      any inconsistencies.
 +    - Config option index_interval has been removed (it was deprecated since 
2.0)
 +    - Deprecated repair JMX APIs are removed.
 +    - The version of snappy-java has been upgraded to 1.1.2.6
 +    - The minimum value for internode message timeouts is 10ms. Previously, 
any
 +      positive value was allowed. See cassandra.yaml entries like
 +      read_request_timeout_in_ms for more details.
 +    - Cassandra 4.0 allows a single port to be used for both secure and 
insecure
 +      connections between cassandra nodes (CASSANDRA-10404). See the yaml for
 +      specific property changes, and see the security doc for full details.
 +    - Due to the parallelization of the initial build of materialized views,
 +      the per token range view building status is stored in the new table
 +      `system.view_builds_in_progress`. The old table 
`system.views_builds_in_progress`
 +      is no longer used and can be removed. See CASSANDRA-12245 for more 
details.
 +    - Config option commitlog_sync_batch_window_in_ms has been deprecated as 
its
 +      documentation has been incorrect and the setting itself is near useless.
 +      Batch mode remains a valid commit log mode, however.
 +    - There is a new commit log mode, group, which is similar to batch mode
 +      but blocks for up to a configurable number of milliseconds between disk 
flushes.
 +    - nodetool clearsnapshot now requires the --all flag to remove all 
snapshots.
 +      Previous behavior would delete all snapshots by default.
 +    - Nodes are now identified by a combination of IP and storage port.
 +      Existing JMX APIs, nodetool, and system tables continue to work
 +      and accept/return just an IP, but there is a new
 +      version of each that works with the full unambiguous identifier.
 +      You should prefer these over the deprecated ambiguous versions that only
 +      work with an IP. This was done to support multiple instances per IP.
 +      Additionally we are moving to only using a single port for encrypted and
 +      unencrypted traffic and if you want multiple instances per IP you must
 +      first switch encrypted traffic to the storage port and not a separate
 +      encrypted port. If you want to use multiple instances per IP
 +      with SSL you will need to use StartTLS on storage_port and set
 +      outgoing_encrypted_port_source to gossip so outbound connections
 +      know what port to connect to for each instance. Before changing
 +      storage port or native port at nodes you must first upgrade the entire 
cluster
 +      and clients to 4.0 so they can handle the port not being consistent 
across
 +      the cluster.
 +    - Names of AWS regions/availability zones have been cleaned up to more 
correctly
 +      match the Amazon names. There is now a new option in 
conf/cassandra-rackdc.properties
 +      that lets users enable the correct names for new clusters, or use the 
legacy
 +      names for existing clusters. See conf/cassandra-rackdc.properties for 
details.
 +    - Background repair has been removed. dclocal_read_repair_chance and
 +      read_repair_chance table options have been removed and are now rejected.
 +      See CASSANDRA-13910 for details.
 +    - Internode TCP connections that do not ack segments for 30s will now
 +      be automatically detected and closed via the Linux TCP_USER_TIMEOUT
 +      socket option. This should be exceedingly rare, but AWS networks (and
 +      other stateful firewalls) apparently suffer from this issue. You can
 +      tune the timeouts on TCP connection and segment ack via the
 +      `cassandra.yaml:internode_tcp_connect_timeout_in_ms` and
 +      `cassandra.yaml:internode_tcp_user_timeout_in_ms` options respectively.
 +      See CASSANDRA-14358 for details.
 +    - repair_session_space_in_mb setting has been added to cassandra.yaml to 
allow operators to reduce
 +      merkle tree size if repair is creating too much heap pressure. The 
repair_session_max_tree_depth
 +      setting added in 3.0.19 and 3.11.5 is deprecated in favor of this 
setting. See CASSANDRA-14096
 +    - The flags 'enable_materialized_views' and 'enable_sasi_indexes' in 
cassandra.yaml
 +      have been set as false by default. Operators should modify them to 
allow the
 +      creation of new views and SASI indexes; the existing ones will continue 
working.
 +      See CASSANDRA-14866 for details.
 +    - CASSANDRA-15216 - The flag 'cross_node_timeout' has been set as true by 
default.
 +      This change is done under the assumption that users have set up NTP on
 +      their clusters or otherwise synchronize their clocks, and that clocks 
are
 +      mostly in sync, since this is a requirement for general correctness of
 +      last write wins.
 +    - CASSANDRA-15257 removed the joda time dependency.  Any time formats
 +      passed will now need to conform to java.time.format.DateTimeFormatter.
 +      Most notably, days and months must be two digits, and years exceeding
 +      four digits need to be prefixed with a plus or minus sign.
 +    - cqlsh now returns a non-zero code in case of errors. This is a backward 
incompatible change so it may
 +      break existing scripts that rely on the current behavior. See 
CASSANDRA-15623 for more details.
 +
 +
++Deprecation
++-----------
++
++    - The JMX MBean org.apache.cassandra.db:type=BlacklistedDirectories has 
been
++      deprecated in favor of 
org.apache.cassandra.db:type=DisallowedDirectories
++      and will be removed in a subsequent major version.
++
++
 +Materialized Views
 +-------------------
 +    - Following a discussion regarding concerns about the design and safety 
of Materialized Views, the C* development
 +      community no longer recommends them for production use, and considers 
them experimental. Warning messages will
 +      now be logged when they are created. (See 
https://www.mail-archive.com/[email protected]/msg11511.html)
 +    - An 'enable_materialized_views' flag has been added to cassandra.yaml to 
allow operators to prevent creation of
 +      views.
 +    - CREATE MATERIALIZED VIEW syntax has become stricter. Partition key 
columns are no longer implicitly considered
 +      to be NOT NULL, and no base primary key columns get automatically 
included in view definition. You have to
 +      specify them explicitly now.
  
  3.11.6
  ======
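The Deprecation note above renames the directories MBean from BlacklistedDirectories to DisallowedDirectories. As a minimal sketch (not part of the patch; the class and helper names here are illustrative), JMX clients polling the old ObjectName can resolve both names side by side during the deprecation window and switch over before the old name is removed:

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

// Illustrative helper: builds the old and new MBean names from the NEWS.txt
// Deprecation note. Only the 'type' key changes; the domain stays the same.
public class MBeanRename
{
    public static ObjectName name(String type)
    {
        try
        {
            return new ObjectName("org.apache.cassandra.db:type=" + type);
        }
        catch (MalformedObjectNameException e)
        {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args)
    {
        ObjectName deprecated = name("BlacklistedDirectories");   // removed in a later major
        ObjectName replacement = name("DisallowedDirectories");   // preferred going forward
        // Same domain, different 'type' key.
        System.out.println(deprecated.getDomain().equals(replacement.getDomain()));
    }
}
```

Monitoring setups that query the MBean by name (e.g. via JConsole or a metrics agent) are the ones affected; the attribute surface is unchanged by the rename.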
diff --cc src/java/org/apache/cassandra/cql3/functions/UDFunction.java
index f98c75f,6928a06..f0fd8d9
--- a/src/java/org/apache/cassandra/cql3/functions/UDFunction.java
+++ b/src/java/org/apache/cassandra/cql3/functions/UDFunction.java
@@@ -96,8 -92,9 +96,8 @@@ public abstract class UDFunction extend
      // array is searched for a match. If a match is found, class-loader
      // rejects access. Otherwise the class/resource can be loaded.
      //
-     private static final String[] whitelistedPatterns =
+     private static final String[] allowedPatterns =
      {
 -    "com/datastax/driver/core/",
      "com/google/common/reflect/TypeToken",
      "java/io/IOException.class",
      "java/io/Serializable.class",
diff --cc src/java/org/apache/cassandra/db/Directories.java
index de5ab88,0ffa6be..bd4081f
--- a/src/java/org/apache/cassandra/db/Directories.java
+++ b/src/java/org/apache/cassandra/db/Directories.java
@@@ -348,42 -350,7 +348,42 @@@ public class Directorie
      }
  
      /**
 +     * Returns a data directory to load the file {@code sourceFile}. If the 
sourceFile is on same disk partition as any
 +     * data directory then use that one as data directory otherwise use 
{@link #getWriteableLocationAsFile(long)} to
 +     * find suitable data directory.
 +     *
-      * Also makes sure returned directory is non-blacklisted.
++     * Also makes sure returned directory is not disallowed.
 +     *
-      * @throws FSWriteError if all directories are blacklisted
++     * @throws FSWriteError if all directories are disallowed.
 +     */
 +    public File getWriteableLocationToLoadFile(final File sourceFile)
 +    {
 +        try
 +        {
 +            final FileStore srcFileStore = 
Files.getFileStore(sourceFile.toPath());
 +            for (final File dataPath : dataPaths)
 +            {
-                 if (BlacklistedDirectories.isUnwritable(dataPath))
++                if (DisallowedDirectories.isUnwritable(dataPath))
 +                {
 +                    continue;
 +                }
 +
 +                if 
(Files.getFileStore(dataPath.toPath()).equals(srcFileStore))
 +                {
 +                    return dataPath;
 +                }
 +            }
 +        }
 +        catch (final IOException e)
 +        {
 +            // pass exceptions in finding filestore. This is best effort 
anyway. Fall back on getWriteableLocationAsFile()
 +        }
 +
 +        return getWriteableLocationAsFile(sourceFile.length());
 +    }
 +
 +    /**
-      * Returns a temporary subdirectory on non-blacklisted data directory
+      * Returns a temporary subdirectory on allowed data directory
       * that _currently_ has {@code writeSize} bytes as usable space.
       * This method does not create the temporary directory.
       *
diff --cc 
src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index 30b4cb8,5d43143..1e72ae5
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@@ -208,11 -213,50 +208,11 @@@ public abstract class AbstractCompactio
       */
      public abstract long getMaxSSTableBytes();
  
 -    public void enable()
 -    {
 -    }
 -
 -    public void disable()
 -    {
 -    }
 -
 -    /**
 -     * @return whether or not MeteredFlusher should be able to trigger 
memtable flushes for this CF.
 -     */
 -    public boolean isAffectedByMeteredFlusher()
 -    {
 -        return true;
 -    }
 -
 -    /**
 -     * If not affected by MeteredFlusher (and handling flushing on its own), 
override to tell MF how much
 -     * space to reserve for this CF, i.e., how much space to subtract from 
`memtable_total_space_in_mb` when deciding
 -     * if other memtables should be flushed or not.
 -     */
 -    public long getMemtableReservedSize()
 -    {
 -        return 0;
 -    }
 -
 -    /**
 -     * Handle a flushed memtable.
 -     *
 -     * @param memtable the flushed memtable
 -     * @param sstables the written sstables. can be null or empty if the 
memtable was clean.
 -     */
 -    public void replaceFlushed(Memtable memtable, Collection<SSTableReader> 
sstables)
 -    {
 -        cfs.getTracker().replaceFlushed(memtable, sstables);
 -        if (sstables != null && !sstables.isEmpty())
 -            CompactionManager.instance.submitBackground(cfs);
 -    }
 -
      /**
-      * Filters SSTables that are to be blacklisted from the given collection
+      * Filters SSTables that are to be excluded from the given collection
       *
-      * @param originalCandidates The collection to check for blacklisted 
SSTables
-      * @return list of the SSTables with blacklisted ones filtered out
+      * @param originalCandidates The collection to check for excluded SSTables
+      * @return list of the SSTables with excluded ones filtered out
       */
      public static List<SSTableReader> 
filterSuspectSSTables(Iterable<SSTableReader> originalCandidates)
      {
diff --cc src/java/org/apache/cassandra/dht/RangeStreamer.java
index 75e2530,a3cc996..75ccb4b
--- a/src/java/org/apache/cassandra/dht/RangeStreamer.java
+++ b/src/java/org/apache/cassandra/dht/RangeStreamer.java
@@@ -216,24 -125,18 +216,24 @@@ public class RangeStreame
      /**
       * Source filter which only includes endpoints contained within a 
provided set.
       */
-     public static class WhitelistedSourcesFilter implements SourceFilter
 -    public static class AllowedSourcesFilter implements ISourceFilter
++    public static class AllowedSourcesFilter implements SourceFilter
      {
-         private final Set<InetAddressAndPort> whitelistedSources;
 -        private final Set<InetAddress> allowedSources;
++        private final Set<InetAddressAndPort> allowedSources;
  
-         public WhitelistedSourcesFilter(Set<InetAddressAndPort> 
whitelistedSources)
 -        public AllowedSourcesFilter(Set<InetAddress> allowedSources)
++        public AllowedSourcesFilter(Set<InetAddressAndPort> allowedSources)
          {
-             this.whitelistedSources = whitelistedSources;
+             this.allowedSources = allowedSources;
          }
  
 -        public boolean shouldInclude(InetAddress endpoint)
 +        public boolean apply(Replica replica)
 +        {
-             return whitelistedSources.contains(replica.endpoint());
++            return allowedSources.contains(replica.endpoint());
 +        }
 +
 +        @Override
 +        public String message(Replica replica)
          {
-             return "Filtered " + replica + " out because it was not 
whitelisted, whitelisted sources: " + whitelistedSources;
 -            return allowedSources.contains(endpoint);
++            return "Filtered " + replica + " out because it was not in the 
allowed set: " + allowedSources;
          }
      }
  
diff --cc src/java/org/apache/cassandra/hints/HintsDispatchExecutor.java
index 2d9fd9d,c562dd0..e71ae07
--- a/src/java/org/apache/cassandra/hints/HintsDispatchExecutor.java
+++ b/src/java/org/apache/cassandra/hints/HintsDispatchExecutor.java
@@@ -240,9 -240,9 +240,9 @@@ final class HintsDispatchExecuto
                  }
                  catch (FSReadError e)
                  {
 -                    logger.error("Failed to dispatch hints file {}: file is 
corrupted ({})", descriptor.fileName(), e);
 +                    logger.error(String.format("Failed to dispatch hints file 
%s: file is corrupted", descriptor.fileName()), e);
                      store.cleanUp(descriptor);
-                     store.blacklist(descriptor);
+                     store.markCorrupted(descriptor);
                      throw e;
                  }
              }
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 428dac1,3d31596..01aa47a
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -1256,10 -1293,10 +1256,10 @@@ public class StorageService extends Not
                              throw new IllegalArgumentException("Unknown host 
specified " + stringHost, ex);
                          }
                      }
-                     streamer.addSourceFilter(new 
RangeStreamer.WhitelistedSourcesFilter(sources));
+                     streamer.addSourceFilter(new 
RangeStreamer.AllowedSourcesFilter(sources));
                  }
  
 -                streamer.addRanges(keyspace, ranges);
 +                streamer.addRanges(keyspace, streamRanges.build());
              }
  
              StreamResultFuture resultFuture = streamer.fetchAsync();
diff --cc test/unit/org/apache/cassandra/db/DiskBoundaryManagerTest.java
index 6e2a714,febcfeb..3cd501e
--- a/test/unit/org/apache/cassandra/db/DiskBoundaryManagerTest.java
+++ b/test/unit/org/apache/cassandra/db/DiskBoundaryManagerTest.java
@@@ -49,11 -49,11 +49,11 @@@ public class DiskBoundaryManagerTest ex
      @Before
      public void setup()
      {
-         BlacklistedDirectories.clearUnwritableUnsafe();
+         DisallowedDirectories.clearUnwritableUnsafe();
          TokenMetadata metadata = StorageService.instance.getTokenMetadata();
 -        metadata.updateNormalTokens(BootStrapper.getRandomTokens(metadata, 
10), FBUtilities.getBroadcastAddress());
 +        metadata.updateNormalTokens(BootStrapper.getRandomTokens(metadata, 
10), FBUtilities.getBroadcastAddressAndPort());
          createTable("create table %s (id int primary key, x text)");
 -        dirs = new Directories(getCurrentColumnFamilyStore().metadata, 
Lists.newArrayList(new Directories.DataDirectory(new File("/tmp/1")),
 +        dirs = new Directories(getCurrentColumnFamilyStore().metadata(), 
Lists.newArrayList(new Directories.DataDirectory(new File("/tmp/1")),
                                                                                
            new Directories.DataDirectory(new File("/tmp/2")),
                                                                                
            new Directories.DataDirectory(new File("/tmp/3"))));
          mock = new MockCFS(getCurrentColumnFamilyStore(), dirs);

