I don’t think that was implemented, but Alan would know.

Now, doing something like this gets complicated (or maybe even really 
difficult), because ATS wants to allocate the cache entry *before* going to 
the origin server. That means you can't know the size of the object until 
you have already gone to origin. The implication is that you would have to 
do a HEAD request (or something similar) first to get the size of the 
object, then select the storage volume, and then go to origin again with 
the GET request.
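Outside of ATS, that two-phase flow could be sketched roughly like this. Note that the size threshold and tier names are invented for illustration; nothing like this exists in ATS today:

```python
# Hypothetical sketch of the two-phase lookup described above: do a
# HEAD first to learn the object size, pick a storage tier, then
# repeat the request as a GET. Threshold and tier names are made up.

SMALL_OBJECT_THRESHOLD = 1 * 1024 * 1024  # 1 MiB, arbitrary cutoff


def choose_tier(content_length, threshold=SMALL_OBJECT_THRESHOLD):
    """Route small objects to the SSD volume, large ones to HDD."""
    if content_length is None:
        # Origin sent no Content-Length (e.g. chunked encoding):
        # fall back to bulk HDD storage.
        return "hdd"
    return "ssd" if content_length < threshold else "hdd"


# The full flow would be:
#   1. HEAD /object        -> read Content-Length from the response
#   2. tier = choose_tier(length)
#   3. allocate the cache entry on that tier's volume
#   4. GET /object         -> write the body into the allocated entry
# which costs an extra origin round trip on every cache miss.
```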

— Leif


> On Apr 10, 2023, at 10:48 AM, Veiko Kukk <veiko.k...@gmail.com> wrote:
> 
> Sharding based on domain won't work for us. We use OVH Swift as the
> backend, and there is no option to redistribute based on size or
> domain name.
> I googled around before writing my first message, and was happy to find
> this:
> https://issues.apache.org/jira/browse/TS-1728?focusedCommentId=13635926&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13635926
> 
> It's exactly what we would need. Did it not get implemented 10 years ago?
> 
> I believe our situation is not unique, and the need to distribute based
> on storage type and object size should be quite common, considering the
> differences between SSDs and HDDs. SSDs are still too small to provide
> proper CDN node cache capacity, and HDDs are too slow at seeking for
> small files.
> 
> Veiko
> 
> 
> Kontakt Leif Hedstrom (<zw...@apache.org>) kirjutas kuupäeval E, 10.
> aprill 2023 kell 19:23:
>> 
>> I don’t think you can have such control, at least not now. I think the best 
>> you could do is to shard your content (small vs large) into two (or more) 
>> domains, and then you can assign volumes based on those names.
>> 
>> — Leif
>> 
>> From hosting.config:
>> 
>> #   Primary destination specifiers are
>> #     domain=
>> #     hostname=
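>> A hedged sketch of that sharding setup (device paths, domain names, and
>> sizes below are placeholders, not something from this thread):
>> 
>> # storage.config: pin devices to volume numbers
>> /dev/disk/by-id/ata-HDD-EXAMPLE   volume=1
>> /dev/disk/by-id/nvme-SSD-EXAMPLE  volume=2
>> 
>> # volume.config: define the two volumes
>> volume=1 scheme=http size=50%
>> volume=2 scheme=http size=50%
>> 
>> # hosting.config: route by hostname, with a catch-all rule
>> hostname=large.example.com volume=1
>> hostname=small.example.com volume=2
>> hostname=* volume=1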
>> 
>>> On Apr 10, 2023, at 5:51 AM, Veiko Kukk <veiko.k...@gmail.com> wrote:
>>> 
>>> Hi
>>> 
>>> We are currently using Nginx in front of ATS to store hot content on
>>> NVMe drives, but would like to drop it and use only ATS. ATS uses full
>>> SATA HDDs to store its content (about 150 TB per node), and the ATS RAM
>>> cache is disabled entirely.
>>> 
>>> From reading the ATS documentation, I only found how to enable the RAM
>>> cache for objects smaller than x, but nothing about how to create a
>>> volume for smaller files on the actual storage devices. The idea is
>>> that HDDs perform better for larger objects that are read sequentially,
>>> while SSDs suit smaller files, because the seek penalty isn't as high
>>> with solid state drives as it is with rotating media.
>>> 
>>> How could I create a volume on the NVMe drives that would only store
>>> files smaller than x in size?
>>> 
>>> Thanks ahead,
>>> Veiko
>> 
