[openstack-dev] How to set up keystone using httpd and mod_wsgi?

2015-03-13 Thread Omkar Joshi
Hi,


Please let me know if there is any documentation for this. I am using
OpenStack Icehouse (RDO).
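
My rough understanding so far is that Apache loads a small WSGI script via
WSGIScriptAlias (typically inside a WSGIDaemonProcess) and that the script
only needs to expose a module-level "application" callable, something like
the sketch below (the paste config path and pipeline name are guesses on my
part, not keystone's actual shipped script), but I would like to confirm
against an official document:

from paste.deploy import loadapp

# mod_wsgi imports this module and serves the module-level 'application'.
# The config path and pipeline name below are assumptions; keystone ships
# its own WSGI scripts which should be preferred.
application = loadapp(
    'config:/etc/keystone/keystone-paste.ini',  # assumed paste config location
    name='main',                                # assumed public API pipeline
)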

-- 
Thanks,
Omkar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Controlling data sent to client

2015-03-10 Thread Omkar Joshi
Thanks, Rick, for the quick reply.

"Are you asking about the rate at which data might come from the object
server(s) to the proxy and need to be held on the proxy while it is sent-on
to the clients?" Yes. The object server will push data faster, so a lot of
data will accumulate on the proxy server if the client cannot keep up.
Shouldn't there be back pressure from the client to the proxy server, and
then from the proxy server to the object server?

Something like: don't buffer more than 10 MB at a time per client?
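
Conceptually I mean something like the bounded buffer below. This is not
Swift's actual code, just a sketch of the mechanism, with an assumed chunk
size and placeholder read/write callables: once roughly 10 MB is queued for
a slow client, the relay stops pulling from the object server.

import queue
import threading

CHUNK = 64 * 1024          # illustrative chunk size (assumption)
MAX_BUFFERED = 10 * 2**20  # cap buffered data per client at roughly 10 MB

def relay(read_chunk, write_chunk):
    """Copy chunks from the object server to the client through a bounded queue."""
    buf = queue.Queue(maxsize=MAX_BUFFERED // CHUNK)

    def reader():
        while True:
            chunk = read_chunk(CHUNK)   # e.g. read from the object-server connection
            buf.put(chunk)              # blocks once the cap is reached: back pressure
            if not chunk:               # empty chunk signals end of object
                break

    t = threading.Thread(target=reader)
    t.start()
    while True:
        chunk = buf.get()               # blocks until the reader produces data
        if not chunk:
            break
        write_chunk(chunk)              # e.g. write to the (slow) client connection
    t.join()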

On Tue, Mar 10, 2015 at 11:59 AM, Rick Jones  wrote:

> On 03/10/2015 11:45 AM, Omkar Joshi wrote:
>
>> Hi,
>>
> >> I am using an OpenStack Swift server. Say multiple clients are requesting
> >> a 5 GB object from the server. The rate at which the server can push data
> >> into the server-side socket is much higher than the rate at which a client
> >> can read it from the proxy server. Is there a configuration or setting we
> >> can use to control or cap the pending data on the server-side socket?
> >> Otherwise this will cause the server to run out of memory.
>>
>
> The Linux networking stack will have a limit to the size of the SO_SNDBUF,
> which will limit how much the proxy server code will be able to shove into
> a given socket at one time.  The Linux networking stack may "autotune" that
> setting if the proxy server code itself isn't making an explicit
> setsockopt(SO_SNDBUF) call.  Such autotuning will be controlled via the
> sysctl net.ipv4.tcp_wmem
>
> If the proxy server code does make an explicit setsockopt(SO_SNDBUF) call,
> that will be limited to no more than what is set in net.core.wmem_max.
>
> But I am guessing you are asking about something different because
> virtually every TCP/IP stack going back to the beginning has had bounded
> socket buffers.  Are you asking about something else?  Are you asking about
> the rate at which data might come from the object server(s) to the proxy
> and need to be held on the proxy while it is sent-on to the clients?
>
> rick
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
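
To make sure I follow the SO_SNDBUF part: a quick standalone way to see those
limits on a Linux box (nothing Swift-specific, just plain sockets) would be
something like the following. The default send buffer starts at the
net.ipv4.tcp_wmem default and is autotuned within its bounds, while an
explicit setsockopt(SO_SNDBUF) disables autotuning and is capped by
net.core.wmem_max (Linux reports back double the granted value to account
for bookkeeping overhead).

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Autotuning bounds and the cap applied to explicit setsockopt() calls:
print(open('/proc/sys/net/ipv4/tcp_wmem').read().strip())  # min default max
print(open('/proc/sys/net/core/wmem_max').read().strip())  # cap for setsockopt

# Send buffer of a fresh socket (tcp_wmem default, autotuned later if not set):
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))

# Explicit request: capped at wmem_max, reported back doubled.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
s.close()

So I agree the per-socket side is bounded; my concern is really how much the
proxy holds per client while relaying, as in the 10 MB cap idea above.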



-- 
Thanks,
Omkar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Controlling data sent to client

2015-03-10 Thread Omkar Joshi
Hi,

I am using an OpenStack Swift server. Say multiple clients are requesting a
5 GB object from the server. The rate at which the server can push data into
the server-side socket is much higher than the rate at which a client can
read it from the proxy server. Is there a configuration or setting we can use
to control or cap the pending data on the server-side socket? Otherwise this
will cause the server to run out of memory.

-- 
Thanks,
Omkar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev