Have you tried changing the network properties on the destination, such as
throttle-inbound and throttle-outbound?
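
For example, something along these lines in the destination's <network> block in
messaging-config.xml could help rule throttling in or out. This is only a sketch:
the policy and max-frequency attributes shown here are assumptions on my part, and
the exact syntax can differ between FDS releases, so check the docs for your version.

<network>
    <session-timeout>0</session-timeout>
    <!-- assumed syntax: max-frequency is messages per second per client;
         0 together with policy="NONE" should effectively disable the throttle -->
    <throttle-inbound policy="NONE" max-frequency="0"/>
    <throttle-outbound policy="NONE" max-frequency="0"/>
</network>

If the missing messages start arriving with throttling explicitly disabled, an
outbound throttle policy that silently drops messages would be a likely culprit;
you could then dial max-frequency back up to a rate your Flex client can keep up with.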

--- In [email protected], "klumikaze" <[EMAIL PROTECTED]> wrote:
>
> Our backend application is pumping a large number of messages through
> an ActiveMQ queue (upwards of 6,000 typically) to our frontend
> application (Flex based) through Flex Data Services.
>
> We are using JConsole to track the number of messages that go into the
> queue and the number of messages that get pulled off of the queue. In
> our testing, 100% of the time Flex Data Services consumes 100% of the
> messages off of the queue. Roughly 30% of the time, however, the Flex
> application's Consumer receives only a small portion of the total
> messages from FDS (the number varies, but it is usually around 5-10%
> of the total messages).
>
> We have looked through the FDS documentation extensively and added all
> the appropriate settings to both our messaging-config.xml and our
> services-config.xml. Here are some excerpts of those settings:
>
> Our queue as defined in messaging-config.xml:
>
> <destination id="MQ00001">
>     <adapter ref="jms" />
>     <properties>
>         <network>
>             <session-timeout>0</session-timeout>
>         </network>
>         <jms>
>             <destination-type>Queue</destination-type>
>             <message-type>javax.jms.TextMessage</message-type>
>             <connection-factory>ConnectionFactory</connection-factory>
>             <destination-jndi-name>dynamicQueues/MQ00001</destination-jndi-name>
>             <delivery-mode>PERSISTENT</delivery-mode>
>             <message-priority>DEFAULT_PRIORITY</message-priority>
>             <acknowledge-mode>AUTO_ACKNOWLEDGE</acknowledge-mode>
>             <transacted-sessions>false</transacted-sessions>
>             <initial-context-environment>
>                 <property>
>                     <name>Context.INITIAL_CONTEXT_FACTORY</name>
>                     <value>org.apache.activemq.jndi.ActiveMQInitialContextFactory</value>
>                 </property>
>                 <property>
>                     <name>Context.PROVIDER_URL</name>
>                     <value>tcp://192.168.2.187:61616</value>
>                 </property>
>             </initial-context-environment>
>         </jms>
>         <server>
>             <max-cache-size>100000</max-cache-size>
>             <message-time-to-live>0</message-time-to-live>
>             <durable>true</durable>
>         </server>
>     </properties>
>     <channels>
>         <channel ref="my-rtmp" />
>     </channels>
> </destination>
>
> As you can see, we are setting a generous value for max-cache-size (the
> number of messages to cache), making sure that our messages live forever
> on the queue (message-time-to-live = 0), and ensuring delivery by setting
> durable to true. We also ensure that our consumers never automatically
> disconnect from the queue (session-timeout = 0).
>
> Our RTMP channel definition in services-config.xml:
>
> <channel-definition id="my-rtmp" class="mx.messaging.channels.RTMPChannel">
>     <endpoint uri="rtmp://{server.name}:443"
>               class="flex.messaging.endpoints.RTMPEndpoint"/>
>     <properties>
>         <idle-timeout-minutes>720</idle-timeout-minutes>
>         <client-to-server-maxbps>10000K</client-to-server-maxbps>
>         <server-to-client-maxbps>10000K</server-to-client-maxbps>
>         <accept-backlog>100000</accept-backlog>
>     </properties>
> </channel-definition>
>
> Again, we are setting generous values here to ensure delivery of
> messages.
>
> We had initially thought that perhaps this was based on the UDP protocol
> and packets were getting dropped (for whatever reason). After some
> research, we are led to believe that RTMP (the protocol over which these
> messages are transferred) is based on TCP, which would mean that there
> shouldn't be any dropping of packets. We have pretty much ruled this out
> as a possible problem.
>
> Does anybody have any experience with these types of issues? Or any
> thoughts on what else we might look at in terms of settings,
> optimization, etc.?
>
> Help from any Adobe guys or general gurus in Flex Data Services would
> be much appreciated.
>
> Thanks for your time,
>
> Brian
>


