Arsnael commented on code in PR #1701:
URL: https://github.com/apache/james-project/pull/1701#discussion_r1306866327


##########
server/data/data-jmap-cassandra/src/main/java/org/apache/james/jmap/cassandra/upload/CassandraUploadRepository.java:
##########
@@ -43,6 +44,9 @@
 import reactor.core.publisher.Mono;
 
 public class CassandraUploadRepository implements UploadRepository {
+
+    public static final BucketName UPLOAD_BUCKET = BucketName.of("uploads_2");

Review Comment:
   Can't we find a better name for this bucket? `jmap_uploads`, for example?



##########
server/data/data-jmap-cassandra/src/main/java/org/apache/james/jmap/cassandra/upload/CassandraUploadRepository.java:
##########
@@ -43,56 +44,56 @@
 import reactor.core.publisher.Mono;
 
 public class CassandraUploadRepository implements UploadRepository {
+
+    public static final BucketName UPLOAD_BUCKET = BucketName.of("uploads_2");
+    public static final Duration EXPIRE_DURATION = Duration.ofDays(7);
     private final UploadDAO uploadDAO;
     private final BlobStore blobStore;
-    private final BucketNameGenerator bucketNameGenerator;
     private final Clock clock;
 
     @Inject
-    public CassandraUploadRepository(UploadDAO uploadDAO, BlobStore blobStore, BucketNameGenerator bucketNameGenerator, Clock clock) {
+    public CassandraUploadRepository(UploadDAO uploadDAO, BlobStore blobStore, Clock clock) {
         this.uploadDAO = uploadDAO;
         this.blobStore = blobStore;
-        this.bucketNameGenerator = bucketNameGenerator;
         this.clock = clock;
     }
 
     @Override
-    public Publisher<UploadMetaData> upload(InputStream data, ContentType contentType, Username user) {
+    public Mono<UploadMetaData> upload(InputStream data, ContentType contentType, Username user) {
         UploadId uploadId = generateId();
-        UploadBucketName uploadBucketName = bucketNameGenerator.current();
-        BucketName bucketName = uploadBucketName.asBucketName();
 
         return Mono.fromCallable(() -> new CountingInputStream(data))
-            .flatMap(countingInputStream -> Mono.from(blobStore.save(bucketName, countingInputStream, LOW_COST))
-                .map(blobId -> new UploadDAO.UploadRepresentation(uploadId, bucketName, blobId, contentType, countingInputStream.getCount(), user, clock.instant()))
+            .flatMap(countingInputStream -> Mono.from(blobStore.save(UPLOAD_BUCKET, countingInputStream, LOW_COST))
+                .map(blobId -> new UploadDAO.UploadRepresentation(uploadId, blobId, contentType, countingInputStream.getCount(), user, clock.instant()))
                 .flatMap(upload -> uploadDAO.save(upload)
                     .thenReturn(upload.toUploadMetaData())));
     }
 
     @Override
-    public Publisher<Upload> retrieve(UploadId id, Username user) {
+    public Mono<Upload> retrieve(UploadId id, Username user) {
         return uploadDAO.retrieve(user, id)
             .map(upload -> Upload.from(upload.toUploadMetaData(),
-                () -> blobStore.read(upload.getBucketName(), upload.getBlobId(), LOW_COST)))
+                () -> blobStore.read(UPLOAD_BUCKET, upload.getBlobId(), LOW_COST)))
             .switchIfEmpty(Mono.error(() -> new UploadNotFoundException(id)));
     }
 
     @Override
-    public Publisher<Void> delete(UploadId id, Username user) {
+    public Mono<Void> delete(UploadId id, Username user) {
         return uploadDAO.delete(user, id);
     }
 
     @Override
-    public Publisher<UploadMetaData> listUploads(Username user) {
+    public Flux<UploadMetaData> listUploads(Username user) {
         return uploadDAO.list(user)
             .map(UploadDAO.UploadRepresentation::toUploadMetaData);
     }
 
     public Mono<Void> purge() {
-        return Flux.from(blobStore.listBuckets())
-            .<UploadBucketName>handle((bucketName, sink) -> UploadBucketName.ofBucket(bucketName).ifPresentOrElse(sink::next, sink::complete))
-            .filter(bucketNameGenerator.evictionPredicate())
-            .concatMap(bucket -> blobStore.deleteBucket(bucket.asBucketName()))
+        Instant sevenDaysAgo = clock.instant().minus(EXPIRE_DURATION);
+        return Flux.from(uploadDAO.all())
+            .filter(upload -> upload.getUploadDate().isBefore(sevenDaysAgo))

Review Comment:
   From what I understood when grooming, the purge was supposed to remove roughly 50% of a user's JMAP uploads when it runs, purging the oldest uploads based on upload_date. (The conclusion being: when a user hits their quota limit, we clean up the oldest 50% of their objects in the store.)
   
   I hardly see how we would limit quota here if we never purge anything younger than 7 days... People could still abuse the object store within that 7-day window and explode their quota, correct? I don't see how this logic differs much from the one currently in place.
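   
   To make that concrete, here is a rough sketch of what I had in mind — a hypothetical `purgeOldestHalf` method, assuming `UploadRepresentation` exposes a `getId()` accessor and `BlobStore` a `delete(BucketName, BlobId)` publisher (both to be checked against the actual APIs):
   
   ```java
   import java.util.Comparator;
   
   import reactor.core.publisher.Flux;
   import reactor.core.publisher.Mono;
   
   // Hypothetical quota-triggered cleanup, not the PR's implementation: once a
   // user exceeds their upload quota, drop the oldest half of their uploads,
   // ordered by upload date, from both the metadata table and the blob store.
   public Mono<Void> purgeOldestHalf(Username user) {
       return uploadDAO.list(user)
           // oldest first, based on the upload date already stored per upload
           .sort(Comparator.comparing(UploadDAO.UploadRepresentation::getUploadDate))
           .collectList()
           .flatMapMany(uploads -> Flux.fromIterable(uploads.subList(0, uploads.size() / 2)))
           // assumes BlobStore.delete(BucketName, BlobId) and UploadRepresentation.getId()
           .concatMap(upload -> Mono.from(blobStore.delete(UPLOAD_BUCKET, upload.getBlobId()))
               .then(uploadDAO.delete(user, upload.getId())))
           .then();
   }
   ```
   
   That way the age of an upload alone never matters; only quota pressure triggers the cleanup.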



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]