Latency problem with update messages

Hi everyone

In my environment, we found that the update messages are fine at the very beginning, but a latency problem slowly shows up; in other words, once the RFA program has been running for a while, the update messages become out-of-date compared to the market.

Does anyone know what the bottleneck is here?

Thanks in advance.

Tags: trep, rfa, rfa-api
Accepted Answer

The usual cause of the scenario you are describing is that the consumer application is not keeping up with the rate of updates being provided by the server, i.e. a slow consumer.

This issue has been addressed in another post - RFA Memory Growth - in terms of memory growth, but as well as causing memory growth, a slow consumer can also affect the timeliness of the data.

Imagine that updates are arriving from the server at a rate of, say, 10,000 updates a second, but your app can only process, say, 5,000 updates a second.

As time goes on, the event queue will continue to grow, with more and more events pending processing. By the time the app gets round to processing a given event from the queue, it is already out-of-date. Initially it might only be a few ms old, but as the queue grows, you could end up with events that have sat in the queue for several seconds or even minutes before the app processes them.
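
To put some numbers on it: at those illustrative rates the backlog grows by 10,000 - 5,000 = 5,000 events every second, so after just one minute roughly 300,000 events are waiting and the update the app is currently processing is about a minute behind the market.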

There are two main ways of mitigating this issue:

  1. Improve the event processing ability of your app, e.g. by optimising your event handling code (and/or, if the hardware can be shown to be under-powered, by using more performant hardware).
  2. Ask your Market Data team to provide you with a conflated feed (or on-the-fly conflation), so that your app receives less data from the server.

One way of optimising your app would be to use the Horizontal Scaling feature of the RFA API. Essentially this involves using multiple sessions/connections, with multiple event queues dispatched by multiple threads across cores, to spread the processing.
You could also look at the processing code you currently have, to minimise the time it spends handling each event before returning control back to the API.
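
To make that more concrete, here is a minimal RFA Java sketch of the dispatching side of such a setup: several event queues, each drained by its own thread. The queue names, the one-queue-per-core choice and the 1000 ms timeout are illustrative assumptions, and the step of spreading your registerClient calls (and sessions/consumers) across the queues is only indicated in a comment.

```java
import com.reuters.rfa.common.Context;
import com.reuters.rfa.common.EventQueue;

// Sketch: one EventQueue per core, each dispatched on its own thread, so that
// update processing is spread across cores rather than funnelled through a
// single dispatch loop.
public class DispatchPoolSketch
{
    public static void main(String[] args)
    {
        Context.initialize();

        int queueCount = Runtime.getRuntime().availableProcessors(); // assumption: one queue per core
        for (int i = 0; i < queueCount; i++)
        {
            final EventQueue queue = EventQueue.create("updateQueue" + i);
            Thread dispatcher = new Thread(() -> {
                while (true)
                {
                    try
                    {
                        queue.dispatch(1000); // wait up to ~1s for events on this queue only
                    }
                    catch (Exception e)       // e.g. queue deactivated during shutdown
                    {
                        break;
                    }
                }
            }, "dispatcher-" + i);
            dispatcher.start();
        }

        // ... acquire your Session(s), create OMMConsumer event source(s) and
        // register different item subsets against different queues so each
        // dispatcher thread carries a share of the update stream ...
    }
}
```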

For details on Horizontal Scaling please refer to the RFA Developer Guide and RFA Config Guide that come with the devkit. They can also be found here:

RFA C++ Documentation

RFA Java Documentation


That sounds like a slow consumer problem, i.e. the application cannot keep up with the update rate and the messages are piling up in the RFA event queue.

How fast is the application processing the updates? Try making processEvent() more efficient by moving slow or processing-heavy code to another thread.
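
For example, here is a rough sketch (RFA Java) of that pattern: processEvent() copies out just the data it needs and hands the heavy work to a worker thread, so the dispatch thread returns to the queue quickly. The Update class, the queue size and the placeholder decoding are assumptions made for illustration, not part of the RFA API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import com.reuters.rfa.common.Client;
import com.reuters.rfa.common.Event;

// Sketch: keep processEvent() lean and push slow work (analytics, persistence,
// GUI updates) onto an application-owned worker thread. RFA event/message
// objects should not be cached beyond the callback, so copy what you need first.
public class LeanClient implements Client
{
    // Simple value object owned by the application, safe to pass between threads
    private static final class Update
    {
        final String itemName;
        final long receivedAt;
        Update(String itemName, long receivedAt) { this.itemName = itemName; this.receivedAt = receivedAt; }
    }

    private final BlockingQueue<Update> work = new ArrayBlockingQueue<>(100_000);

    public LeanClient()
    {
        Thread worker = new Thread(() -> {
            try
            {
                while (true)
                {
                    Update u = work.take();
                    // ... slow or processing-heavy code goes here, off the dispatch thread ...
                }
            }
            catch (InterruptedException ignored) { }
        }, "update-worker");
        worker.setDaemon(true);
        worker.start();
    }

    public void processEvent(Event event)
    {
        // Decode only the fields you actually need, then return quickly so the
        // dispatch thread can move on to the next queued event.
        String itemName = String.valueOf(event.getHandle()); // placeholder for real decoding
        work.offer(new Update(itemName, System.currentTimeMillis())); // if full, consider dropping/conflating rather than blocking
    }
}
```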


Basically, when the application is experiencing the slow consumer condition, there will be a lot of event messages pending (waiting to be processed) in the event queue. As a result, the application will experience both memory growth and data delay problems (the longer an event remains in the event queue, the longer the delay it incurs).

Generally, the slow consumer condition can be confirmed by checking the number of data events in the event queue. A typical approach is to check the value returned from the EventQueue.dispatch method, which provides the estimated number of events contained in the event queue. If the value returned is large (i.e. a large number of events in the queue), this would confirm the slow consumer condition.
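
As a rough illustration, a dispatch loop along these lines could flag the condition by watching the value dispatch() returns (assuming, as described above, that it is the estimated number of events still waiting in the queue). The threshold and the logging are arbitrary choices for the sketch.

```java
import com.reuters.rfa.common.EventQueue;

// Sketch: log a warning whenever the estimated backlog reported by dispatch()
// exceeds a threshold, as a simple slow-consumer health check.
public class QueueDepthMonitor
{
    private static final long BACKLOG_WARN_THRESHOLD = 1_000; // tune for your update rates

    public static void dispatchLoop(EventQueue eventQueue)
    {
        while (true)
        {
            try
            {
                long pending = eventQueue.dispatch(1000); // waits up to ~1s if the queue is empty
                if (pending > BACKLOG_WARN_THRESHOLD)
                {
                    System.err.println("Possible slow consumer: ~" + pending + " events still queued");
                }
            }
            catch (Exception e) // e.g. queue deactivated during shutdown
            {
                break;
            }
        }
    }
}
```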
