[jira] [Updated] (CASSANDRA-16027) bin/cassandra may fail to execute in certain environments due to unhandled output pipe

2020-08-04 Thread Christopher Bradford (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Bradford updated CASSANDRA-16027:
-
Description: 
While developing Docker images for C* 3.11.6 I noticed bin/cassandra failed to 
start when executed with nohup in a container based on the Red Hat Universal 
Base Image on Kubernetes. Curiously, this issue could not be reproduced by 
simply running the container on Docker; it only occurred when the container 
was scheduled within a k8s pod.

 

Running strace indicates the following:
{code:java}
[pid 3666026] write(2, "OpenJDK 64-Bit Server VM warning: ", 34) = -1 EPIPE (Broken pipe)
{code}
 

The issue occurs before the system.log file is created. After further digging 
into the command being run, it was determined that bin/cassandra was not 
redirecting all output streams. In this particular environment this leads to the 
process going defunct and Cassandra never starting.

 

The 4 lines starting with 
{code:java}
exec $NUMACTL
{code}
do not redirect one of the output streams (stdout or stderr). As a workaround, 
suffixing each line with 
{code:java}
> /var/log/cassandra/stdout.log 2> /var/log/cassandra/stderr.log
{code}
resolves the issue and Cassandra starts cleanly.
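
For illustration, the workaround applied to one of those launch lines might look 
roughly like the following. This is a hedged sketch: the exact exec invocation 
differs between Cassandra branches, and the log destinations are simply the ones 
used in our containers, not a recommendation.
{code:bash}
# Hedged sketch of the workaround: redirect both stdout and stderr so the JVM's
# early writes (e.g. the "OpenJDK 64-Bit Server VM warning" above) never hit a
# closed pipe. Variable names are indicative of those used by bin/cassandra,
# not copied verbatim.
exec $NUMACTL "$JAVA" $JVM_OPTS $cassandra_parms -cp "$CLASSPATH" $props "$class" \
  > /var/log/cassandra/stdout.log 2> /var/log/cassandra/stderr.log
{code}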

 

Rather than bake this into our containers, the fix should make its way 
upstream.

  was:
While developing Docker images for C* 3.11.6 I noticed bin/cassandra failed to 
start when executed with nohup in a container based Red Hat Universal Base 
Image on Kubernetes. Curiously this issue could not be reproduced when simply 
running the container on Docker. Instead it had to be scheduled within a k8s 
pod.

 

Running an strace indicates the following:
{code:java}
[pid 3666026] write(2, "OpenJDK 64-Bit Server VM warning: ", 34) = -1 EPIPE 
(Broken pipe) {code}
 

The issue occurs before the system.log file is created. After further digging 
in to the command being run it was determined the bin/cassandra file was not 
redirecting all output pipes. In this particular environment this leads to the 
process going defunct and no Cassandra running.

 

{{The 4 lines starting with }}
{code:java}
exec $NUMACTL {code}
{{are not handling one of the outputs (stderr or stdout). As a workaround 
suffixing each line with }}
{code:java}
> /var/log/cassandra/stdout.log 2> /var/log/cassandra/stderr.log
{code}
{{resolves the issue and Cassandra starts without any issues.}}

 

{{Rather than bake this into our containers the fix should make its way 
upstream.}}


> bin/cassandra may fail to execute in certain environments due to unhandled 
> output pipe
> --
>
> Key: CASSANDRA-16027
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16027
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Christopher Bradford
>Priority: Normal
>
> While developing Docker images for C* 3.11.6 I noticed bin/cassandra failed 
> to start when executed with nohup in a container based on the Red Hat Universal 
> Base Image on Kubernetes. Curiously, this issue could not be reproduced by 
> simply running the container on Docker; it only occurred when the container 
> was scheduled within a k8s pod.
>  
> Running strace indicates the following:
> {code:java}
> [pid 3666026] write(2, "OpenJDK 64-Bit Server VM warning: ", 34) = -1 EPIPE 
> (Broken pipe) {code}
>  
> The issue occurs before the system.log file is created. After further digging 
> into the command being run, it was determined that bin/cassandra was not 
> redirecting all output streams. In this particular environment this leads to 
> the process going defunct and Cassandra never starting.
>  
> The 4 lines starting with 
> {code:java}
> exec $NUMACTL {code}
> do not redirect one of the output streams (stdout or stderr). As a workaround, 
> suffixing each line with 
> {code:java}
> > /var/log/cassandra/stdout.log 2> /var/log/cassandra/stderr.log
> {code}
> resolves the issue and Cassandra starts cleanly.
>  
> Rather than bake this into our containers, the fix should make its way 
> upstream.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-16027) bin/cassandra may fail to execute in certain environments due to unhandled output pipe

2020-08-04 Thread Christopher Bradford (Jira)
Christopher Bradford created CASSANDRA-16027:


 Summary: bin/cassandra may fail to execute in certain environments 
due to unhandled output pipe
 Key: CASSANDRA-16027
 URL: https://issues.apache.org/jira/browse/CASSANDRA-16027
 Project: Cassandra
  Issue Type: Bug
Reporter: Christopher Bradford


While developing Docker images for C* 3.11.6 I noticed bin/cassandra failed to 
start when executed with nohup in a container based on the Red Hat Universal 
Base Image on Kubernetes. Curiously, this issue could not be reproduced by 
simply running the container on Docker; it only occurred when the container 
was scheduled within a k8s pod.

 

Running strace indicates the following:
{code:java}
[pid 3666026] write(2, "OpenJDK 64-Bit Server VM warning: ", 34) = -1 EPIPE (Broken pipe)
{code}
 

The issue occurs before the system.log file is created. After further digging 
into the command being run, it was determined that bin/cassandra was not 
redirecting all output streams. In this particular environment this leads to the 
process going defunct and Cassandra never starting.

 

The 4 lines starting with 
{code:java}
exec $NUMACTL
{code}
do not redirect one of the output streams (stdout or stderr). As a workaround, 
suffixing each line with 
{code:java}
> /var/log/cassandra/stdout.log 2> /var/log/cassandra/stderr.log
{code}
resolves the issue and Cassandra starts cleanly.

 

Rather than bake this into our containers, the fix should make its way 
upstream.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-15823) Support for networking via identity instead of IP

2020-05-20 Thread Christopher Bradford (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17112377#comment-17112377
 ] 

Christopher Bradford commented on CASSANDRA-15823:
--

Isn’t the host id a UUID? I’m not suggesting we remove that part of the equation. 
The goal of this ticket is to support address identifiers other than IP. 

DNS availability and resilience should be weighed by users on a per-environment 
basis. If a user prefers configuration by hostname and believes their DNS is HA 
and resilient enough, they should be able to opt in here. 

Now, if we’re talking about validating that a request was received and processed 
by a specific host id, IMHO that’s a different change / ticket. 

Do we have anything in our protocols / RPC that validates a request that’s 
received? I.e. if a coordinator sends a request to an IP that’s been swapped, 
does the receiving node validate that the request belongs to it? Maybe more 
succinctly, are host ids sent along with each request for validation purposes?

> Support for networking via identity instead of IP
> -
>
> Key: CASSANDRA-15823
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15823
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Christopher Bradford
>Priority: Normal
> Attachments: consul-mesh-gateways.png, 
> istio-multicluster-with-gateways.svg, linkerd-service-mirroring.svg
>
>
> TL;DR: Instead of mapping host ids to IPs, use hostnames. This allows 
> resolution to different IP addresses per DC that may then be forwarded to 
> nodes on remote networks without requiring node to node IP connectivity for 
> cross-dc links.
>  
> This approach should not affect existing deployments as those could continue 
> to use IPs as the hostname and skip resolution.
> 
> With orchestration platforms like Kubernetes and the usage of ephemeral 
> containers in environments today we should consider some changes to how we 
> handle the tracking of nodes and their network location. Currently we 
> maintain a mapping between host ids and IP addresses.
>  
> With traditional infrastructure, if a node goes down it, usually, comes back 
> up with the same IP. In some environments this contract may be explicit with 
> virtual IPs that may move between hosts. In newer deployments, like on 
> Kubernetes, this contract is not possible. Pods (analogous to nodes) are 
> assigned an IP address at start time. Should the pod be restarted or 
> scheduled on a different host there is no guarantee we would have the same 
> IP. Cassandra is protected here as we already have logic in place to update 
> peers when we come up with the same host id, but a different IP address.
>  
> There are ways to get Kubernetes to assign a specific IP per Pod. Most 
> recommendations involve the use of a service per pod. Communication with the 
> fixed service IP would automatically forward to the associated pod, 
> regardless of address. We _could_ use this approach, but it seems like this 
> would needlessly create a number of extra resources in our k8s cluster to get 
> around the problem. Which, to be fair, doesn't seem like much of a problem 
> with the aforementioned mitigations built into C*.
>  
> So what is the _actual_ problem? *Cross-region, cross-cloud, 
> hybrid-deployment connectivity between pods is a pain.* This can be solved 
> with significant investment by those who want to deploy these types of 
> topologies. You can definitely configure connectivity between clouds over 
> dedicated connections, or VPN tunnels. With a big chunk of time ensuring that 
> pod to pod connectivity just works even if those pods are managed by separate 
> control planes, but that again requires time and talent. There are a number 
> of edge cases to support between the ever so slight, but very important, 
> differences in cloud vendor networks.
>  
> Recently there have been a number of innovations that aid in the deployment 
> and operation of these types of applications on Kubernetes. Service meshes 
> support distributed microservices running across multiple k8s cluster control 
> planes in disparate networks. Instead of directly connecting to IP addresses 
> of remote services, they use a hostname. With this approach, hostname 
> traffic may then be routed to a proxy that sends traffic over the WAN 
> (sometimes with mTLS) to another proxy pod in the remote cluster which then 
> forwards the data along to the correct pod in that network. (See attached 
> diagrams)
>  
> Which brings us to the point of this ticket. Instead of mapping host ids to 
> IPs, use hostnames (and update the underlying address periodically instead of 
> caching indefinitely). This allows resolution to different IP addresses per 
> DC (k8s cluster) that may then be forwarded to nodes (pods) 

[jira] [Updated] (CASSANDRA-15823) Support for networking via identity instead of IP

2020-05-19 Thread Christopher Bradford (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Bradford updated CASSANDRA-15823:
-
Attachment: consul-mesh-gateways.png
linkerd-service-mirroring.svg
istio-multicluster-with-gateways.svg

> Support for networking via identity instead of IP
> -
>
> Key: CASSANDRA-15823
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15823
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Christopher Bradford
>Priority: Normal
> Attachments: consul-mesh-gateways.png, 
> istio-multicluster-with-gateways.svg, linkerd-service-mirroring.svg
>
>
> TL;DR: Instead of mapping host ids to IPs, use hostnames. This allows 
> resolution to different IP addresses per DC that may then be forwarded to 
> nodes on remote networks without requiring node to node IP connectivity for 
> cross-dc links.
>  
> This approach should not affect existing deployments as those could continue 
> to use IPs as the hostname and skip resolution.
> 
> With orchestration platforms like Kubernetes and the usage of ephemeral 
> containers in environments today we should consider some changes to how we 
> handle the tracking of nodes and their network location. Currently we 
> maintain a mapping between host ids and IP addresses.
>  
> With traditional infrastructure, if a node goes down it, usually, comes back 
> up with the same IP. In some environments this contract may be explicit with 
> virtual IPs that may move between hosts. In newer deployments, like on 
> Kubernetes, this contract is not possible. Pods (analogous to nodes) are 
> assigned an IP address at start time. Should the pod be restarted or 
> scheduled on a different host there is no guarantee we would have the same 
> IP. Cassandra is protected here as we already have logic in place to update 
> peers when we come up with the same host id, but a different IP address.
>  
> There are ways to get Kubernetes to assign a specific IP per Pod. Most 
> recommendations involve the use of a service per pod. Communication with the 
> fixed service IP would automatically forward to the associated pod, 
> regardless of address. We _could_ use this approach, but it seems like this 
> would needlessly create a number of extra resources in our k8s cluster to get 
> around the problem. Which, to be fair, doesn't seem like much of a problem 
> with the aforementioned mitigations built into C*.
>  
> So what is the _actual_ problem? *Cross-region, cross-cloud, 
> hybrid-deployment connectivity between pods is a pain.* This can be solved 
> with significant investment by those who want to deploy these types of 
> topologies. You can definitely configure connectivity between clouds over 
> dedicated connections, or VPN tunnels. With a big chunk of time ensuring that 
> pod to pod connectivity just works even if those pods are managed by separate 
> control planes, but that again requires time and talent. There are a number 
> of edge cases to support between the ever so slight, but very important, 
> differences in cloud vendor networks.
>  
> Recently there have been a number of innovations that aid in the deployment 
> and operation of these types of applications on Kubernetes. Service meshes 
> support distributed microservices running across multiple k8s cluster control 
> planes in disparate networks. Instead of directly connecting to IP addresses 
> of remote services, they use a hostname. With this approach, hostname 
> traffic may then be routed to a proxy that sends traffic over the WAN 
> (sometimes with mTLS) to another proxy pod in the remote cluster which then 
> forwards the data along to the correct pod in that network. (See attached 
> diagrams)
>  
> Which brings us to the point of this ticket. Instead of mapping host ids to 
> IPs, use hostnames (and update the underlying address periodically instead of 
> caching indefinitely). This allows resolution to different IP addresses per 
> DC (k8s cluster) that may then be forwarded to nodes (pods) on remote 
> networks (k8s clusters) without requiring node to node (pod to pod) IP 
> connectivity between them. Traditional deployments can still function like 
> they do today (even if operators opt to keep using IPs as identifiers instead 
> of hostnames). This proxy approach is then enabled like those we see in 
> service meshes.
>  
> _Notes_
> C* already has the concept of broadcast addresses vs those which are bound on 
> the node. This approach _could_ be leveraged to provide the behavior we're 
> looking for, but then the broadcast values would need to be pre-computed 
> _*and match*_ across all k8s control planes. By using hostnames the 
> underlying IP address does not matter and will most likely be different in 

[jira] [Created] (CASSANDRA-15823) Support for networking via identity instead of IP

2020-05-19 Thread Christopher Bradford (Jira)
Christopher Bradford created CASSANDRA-15823:


 Summary: Support for networking via identity instead of IP
 Key: CASSANDRA-15823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15823
 Project: Cassandra
  Issue Type: Improvement
Reporter: Christopher Bradford


TL;DR: Instead of mapping host ids to IPs, use hostnames. This allows resolution 
to different IP addresses per DC, and traffic may then be forwarded to nodes on 
remote networks without requiring node-to-node IP connectivity for cross-DC 
links.

 

This approach should not affect existing deployments as those could continue to 
use IPs as the hostname and skip resolution.

With orchestration platforms like Kubernetes and the use of ephemeral containers 
in today's environments, we should consider some changes to how we handle the 
tracking of nodes and their network location. Currently we maintain a mapping 
between host ids and IP addresses.

 

With traditional infrastructure, if a node goes down it usually comes back up 
with the same IP. In some environments this contract may be explicit, with 
virtual IPs that may move between hosts. In newer deployments, like on 
Kubernetes, this contract is not possible. Pods (analogous to nodes) are 
assigned an IP address at start time. Should the pod be restarted or scheduled 
on a different host, there is no guarantee it would have the same IP. Cassandra 
is protected here, as we already have logic in place to update peers when a node 
comes up with the same host id but a different IP address.

 

There are ways to get Kubernetes to assign a specific IP per pod. Most 
recommendations involve the use of a service per pod. Communication with the 
fixed service IP would automatically forward to the associated pod, regardless 
of its address. We _could_ use this approach, but it seems like it would 
needlessly create a number of extra resources in our k8s cluster to get around 
the problem, which, to be fair, doesn't seem like much of a problem given the 
aforementioned mitigations built into C*.

 

So what is the _actual_ problem? *Cross-region, cross-cloud, hybrid-deployment 
connectivity between pods is a pain.* This can be solved with significant 
investment by those who want to deploy these types of topologies. You can 
certainly configure connectivity between clouds over dedicated connections or 
VPN tunnels, and with a big chunk of time ensure that pod-to-pod connectivity 
just works even when those pods are managed by separate control planes, but that 
again requires time and talent. There are also a number of edge cases to support 
owing to the ever so slight, but very important, differences between cloud 
vendor networks.

 

Recently there have been a number of innovations that aid in the deployment and 
operation of these types of applications on Kubernetes. Service meshes support 
distributed microservices running across multiple k8s cluster control planes in 
disparate networks. Instead of directly connecting to the IP addresses of remote 
services, they use a hostname. With this approach, traffic to the hostname may 
be routed to a proxy that sends it over the WAN (sometimes with mTLS) to another 
proxy pod in the remote cluster, which then forwards the data along to the 
correct pod in that network. (See the attached diagrams.)

 

Which brings us to the point of this ticket. Instead of mapping host ids to 
IPs, use hostnames (and update the underlying address periodically instead of 
caching it indefinitely). This allows resolution to different IP addresses per 
DC (k8s cluster), and traffic may then be forwarded to nodes (pods) on remote 
networks (k8s clusters) without requiring node-to-node (pod-to-pod) IP 
connectivity between them. Traditional deployments can still function like they 
do today (even if operators opt to keep using IPs as identifiers instead of 
hostnames). This enables proxy approaches like those we see in service meshes.

 

_Notes_

C* already has the concept of broadcast addresses vs. those which are bound on 
the node. This approach _could_ be leveraged to provide the behavior we're 
looking for, but then the broadcast values would need to be pre-computed _*and 
match*_ across all k8s control planes. By using hostnames, the underlying IP 
address does not matter and will most likely be different in each cluster.
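
As a rough illustration of why hostnames sidestep the pre-computation problem, 
the same logical peer name can resolve to a different address depending on which 
cluster's resolver answers. The names and addresses below are hypothetical.
{code:bash}
# Hypothetical illustration: one peer identity, different answers per cluster.
# Any DNS client (getent, dig, ...) demonstrates the same idea.
getent hosts cassandra-dc2-rack1-0.cassandra.example.internal
# From a node in dc1 this might resolve to 10.12.0.41 (a local mesh/egress gateway),
# while from a node in dc2 it resolves to the pod's own address, e.g. 10.244.3.17.
{code}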

 

I recognize the title may be a bit misleading, as we would obviously still 
communicate over TCP/IP, but it concisely conveys the point.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9191) Log and count failure to obtain requested consistency

2016-10-07 Thread Christopher Bradford (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15554578#comment-15554578
 ] 

Christopher Bradford commented on CASSANDRA-9191:
-

Do we only want the query displayed in the debug log, or in the tracing as well? 
This is pulled from the read path; are you also looking for this message when 
the requested CL is not achieved during a write operation?

> Log and count failure to obtain requested consistency
> -
>
> Key: CASSANDRA-9191
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9191
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Stump
>Priority: Minor
>  Labels: lhf
>
> Cassandra should have a way to log failed requests due to failure to obtain 
> the requested consistency. This should be logged as an error or warning by 
> default. Also exposed should be a counter for the benefit of OpsCenter. 
> Currently the only way to log this is at the client. Often the application 
> and DB teams are separate and it's very difficult to obtain client logs. Also, 
> because it's only visible to the client, no visibility is given to OpsCenter, 
> making it difficult for the field to track down or isolate systematic or 
> node-level errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9191) Log and count failure to obtain requested consistency

2016-10-07 Thread Christopher Bradford (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15554578#comment-15554578
 ] 

Christopher Bradford edited comment on CASSANDRA-9191 at 10/7/16 8:59 AM:
--

Do we only want the query displayed in the debug log, or in the tracing as well? 
This snippet is pulled from the read path; are you also looking for this message 
when the requested CL is not achieved during a write operation?


was (Author: bradfordcp):
Do we only want the query displayed in the debug log or the tracing as well? 
This is pulled from the read path, are you looking for this message when the 
requested CL is not achieved during a write operation?

> Log and count failure to obtain requested consistency
> -
>
> Key: CASSANDRA-9191
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9191
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Stump
>Priority: Minor
>  Labels: lhf
>
> Cassandra should have a way to log failed requests due to failure to obtain 
> the requested consistency. This should be logged as an error or warning by 
> default. Also exposed should be a counter for the benefit of OpsCenter. 
> Currently the only way to log this is at the client. Often the application 
> and DB teams are separate and it's very difficult to obtain client logs. Also, 
> because it's only visible to the client, no visibility is given to OpsCenter, 
> making it difficult for the field to track down or isolate systematic or 
> node-level errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12701) Repair history tables should have TTL and TWCS

2016-10-07 Thread Christopher Bradford (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15554425#comment-15554425
 ] 

Christopher Bradford commented on CASSANDRA-12701:
--

Is it better to create a patch or push a PR to GitHub?

> Repair history tables should have TTL and TWCS
> --
>
> Key: CASSANDRA-12701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12701
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Chris Lohfink
>  Labels: lhf
> Attachments: CASSANDRA-12701.txt
>
>
> Some tools schedule a lot of small subrange repairs, which can lead to a lot 
> of repairs constantly being run. These partitions can grow pretty big in 
> theory. I don't think much reads from them, which might help, but it's still 
> kinda wasted disk space. I think a month TTL (longer than gc grace) and maybe 
> a 1-day TWCS window makes sense to me.
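
Applied by hand, the TTL and TWCS settings proposed above could look roughly 
like the following. This is a hedged sketch using standard CQL table options 
against system_distributed.repair_history (parent_repair_history would be 
treated the same way); it is not necessarily what the attached 
CASSANDRA-12701.txt patch does.
{code:bash}
# Hedged sketch: cap growth of the repair history table with a 30-day TTL and
# daily TWCS windows. Option names are standard Cassandra CQL; values are examples.
cqlsh -e "
  ALTER TABLE system_distributed.repair_history
    WITH default_time_to_live = 2592000
     AND compaction = {'class': 'TimeWindowCompactionStrategy',
                       'compaction_window_unit': 'DAYS',
                       'compaction_window_size': '1'};"
{code}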



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12701) Repair history tables should have TTL and TWCS

2016-10-06 Thread Christopher Bradford (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Bradford updated CASSANDRA-12701:
-
Attachment: CASSANDRA-12701.txt

> Repair history tables should have TTL and TWCS
> --
>
> Key: CASSANDRA-12701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12701
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Chris Lohfink
>  Labels: lhf
> Attachments: CASSANDRA-12701.txt
>
>
> Some tools schedule a lot of small subrange repairs, which can lead to a lot 
> of repairs constantly being run. These partitions can grow pretty big in 
> theory. I don't think much reads from them, which might help, but it's still 
> kinda wasted disk space. I think a month TTL (longer than gc grace) and maybe 
> a 1-day TWCS window makes sense to me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12701) Repair history tables should have TTL and TWCS

2016-10-06 Thread Christopher Bradford (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Bradford updated CASSANDRA-12701:
-
Status: Patch Available  (was: Open)

> Repair history tables should have TTL and TWCS
> --
>
> Key: CASSANDRA-12701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12701
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Chris Lohfink
>  Labels: lhf
> Attachments: CASSANDRA-12701.txt
>
>
> Some tools schedule a lot of small subrange repairs, which can lead to a lot 
> of repairs constantly being run. These partitions can grow pretty big in 
> theory. I don't think much reads from them, which might help, but it's still 
> kinda wasted disk space. I think a month TTL (longer than gc grace) and maybe 
> a 1-day TWCS window makes sense to me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)