On Thursday, September 6, 2018 at 11:34:00 PM UTC-4, Tim Hockin wrote:
>
>
>
> On Thu, Sep 6, 2018, 1:52 PM Naseem Ullah <nas...@transit.app> wrote:
>
>> I do not think you have to understand what you are asking for; I've 
>> learned a lot by asking questions I only half understood :) With that 
>> said, autoscaling STS was a question, not a feature request :)
>>
>
> LOL, fair enough
>
>
>> I do not see how "data is important, and needs to be preserved" and "pods 
>> (compute) have no identity" are mutually incompatible statements, but if 
>> you say so. :)
>>
>
> Think of it this way - if the deployment scales up, clearly we should add 
> more volumes.  What do I do if it scales down?  Delete the volumes?  Hold 
> on to them for some later scale-up?  For how long?  How many volumes?   
>
> Fundamentally, the persistent volume abstraction is wrong for what you 
> want here.  We have talked about something like volume pools, which would 
> be the storage equivalent of Deployments, but we have found very few use 
> cases where that seems to be the best abstraction.
>
> E.g., in this case, the data seems to be some sort of cache of recreatable 
> data.  Maybe you really want a cache?
>
>>
>
Now that I understood! 
Maybe I do want a cache of sorts? 
Basically, a bunch of objects are downloaded based on a query. The pods then 
keep listening to a queue of sorts: if something changes, a different object 
may be downloaded, another that is no longer needed may be deleted, and so 
on. This is what I call the "diff". It's that initial download that takes 
2-3 minutes.
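
To make that concrete, here is roughly the shape of manifest I have in mind 
(the name, image, and size are made up): a StatefulSet with 
volumeClaimTemplates, so each replica keeps its own PVC across restarts and 
a restarted pod only has to sync the diff:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: object-sync              # hypothetical name
spec:
  serviceName: object-sync
  replicas: 3
  selector:
    matchLabels:
      app: object-sync
  template:
    metadata:
      labels:
        app: object-sync
    spec:
      containers:
      - name: sync
        image: example/object-sync:latest   # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /data                  # objects synced here
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi                     # made-up size

Of course that brings back your scale-down question: the PVCs for 
scaled-down replicas just stick around until someone deletes them.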
 

>> In any case the data is more or less important since it is fetchable; it's 
>> just that if it's already there when a new pod is spun up, or when a 
>> deployment is updated and a new pod is created, it speeds up startup time 
>> drastically (virtually immediate vs. 2-3 minutes to sync)
>>
>
> Do you need all the data right away or can it be copied in on demand? 
>

Need all the data (objects), as they should be, at all times. It has to 
match what was queried and what continues to come in over the queue. 
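
Since a pod shouldn't receive traffic until the full set is present, the 
rough idea I have for gating readiness is something like this (the sentinel 
file is made up; assuming the syncer writes it after the first full 
download):

readinessProbe:
  exec:
    command: ["test", "-f", "/data/.synced"]  # hypothetical sentinel written by the syncer
  initialDelaySeconds: 10
  periodSeconds: 5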

>  
>
>> Shared storage would be ideal (a Deployment with HPA and a mounted NFS 
>> volume). But I get OOM errors when a RWX persistent volume is used and 
>> more than one pod is syncing the same data to that volume at the same 
>> time; I do not know why these OOM errors occur. Maybe that has something 
>> to do with the code that syncs the data. RWX seems to be a recurring 
>> challenge.
>>
>
> RWX is challenging because block devices generally don't support it at 
> all, and the only mainstream FS that does is NFS, and well... NFS...
>

I see... and yes amen to that! 
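
For reference, the shared-volume variant I was testing looks roughly like 
this (server address, export path, and sizes are made up): a pre-provisioned 
NFS PersistentVolume with ReadWriteMany, plus a claim that every replica of 
the Deployment mounts:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-objects             # hypothetical name
spec:
  capacity:
    storage: 50Gi                  # made-up size
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.internal   # hypothetical NFS server
    path: /exports/objects         # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-objects
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""             # bind to the pre-created PV, not a dynamic class
  resources:
    requests:
      storage: 50Gi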

>
>
>>
>> On Thursday, September 6, 2018 at 4:33:18 PM UTC-4, Tim Hockin wrote:
>>>
>>> You have to understand what you are asking for.  You're saying "this 
>>> data is important and needs to be preserved beyond any one pod (a 
>>> persistent volume)" but you're also saying "the pods have no identity 
>>> because they can scale horizontally".  These are mutually incompatible 
>>> statements. 
>>>
>>> You really want a shared storage API, not volumes... 
>>> On Thu, Sep 6, 2018 at 1:08 PM Naseem Ullah <nas...@transit.app> wrote: 
>>> > 
>>> > I see, I see... what about autoscaling StatefulSets with an HPA? 
>>> > 
>>> > > On Sep 6, 2018, at 4:06 PM, 'Tim Hockin' via Kubernetes user 
>>> discussion and Q&A <kubernet...@googlegroups.com> wrote: 
>>> > > 
>>> > > Deployments and PersistentVolumes are generally not a good 
>>> > > combination.  This is what StatefulSets are for. 
>>> > > 
>>> > > There's work happening to allow creation of a volume from a 
>>> snapshot, 
>>> > > but it's only Alpha in the next release. 
>>> > > On Thu, Sep 6, 2018 at 1:03 PM Naseem Ullah <nas...@transit.app> 
>>> wrote: 
>>> > >> 
>>> > >> Hello, 
>>> > >> 
>>> > >> I have a similar use case to Montassar. 
>>> > >> 
>>> > >> Although I could use emptyDirs, each newly spun-up pod takes 2-3 
>>> minutes to download the required data (the pod does something similar to 
>>> git-sync). If volumes could be prepopulated when a new pod is spun up, it 
>>> would simply sync the diff, which would drastically reduce startup 
>>> readiness time. 
>>> > >> 
>>> > >> Any suggestions? Right now the tradeoff is between a static number 
>>> of replicas with a matching number of PVCs, or an HPA with an emptyDir 
>>> volume, which increases pod startup time. 
>>> > >> 
>>> > >> Thanks, 
>>> > >> Naseem 
>>> > >> 
>>> > >> On Thursday, January 5, 2017 at 6:07:42 PM UTC-5, Montassar Dridi 
>>> wrote: 
>>> > >>> 
>>> > >>> Hello!! 
>>> > >>> 
>>> > >>> I'm using a Kubernetes Deployment with a persistent volume to run 
>>> my application, but when I try to add more replicas or autoscale, all the 
>>> new pods try to connect to the same volume. 
>>> > >>> How can I automatically create a new volume for each new pod, the 
>>> way StatefulSets (PetSets) do? 