
C++ EMA Horizontal Scaling Limitations / Connection Lost


We are experiencing a situation similar to the one mentioned on this thread.

We have attempted horizontal scaling, as shown in this example

However, we are still hitting the connection lost issue.

To further isolate the cause of the problem, we need to understand whether there is still a risk of an internal buffer filling up.

For example, each consumer thread issues this call

In the dispatch method, we see that there is a specified time budget for several things to happen. That is, part of the time budget is spent on reading data from a pipe (?), and the remaining time is spent on executing all kinds of functors that are wrapped in `TimeOut` objects. If our understanding is correct, these functors should be responsible for calling `ConsumerClient::onUpdateMsg` and similar virtual methods.

If this is the case and our understanding matches what actually happens in the engine, what is the likeliest cause of our trouble? Could it be that an internal buffer is still reaching maximum capacity because we exceed the timeout budget too many times?



Hi @trca,

I have a few basic questions to understand your issue in detail. Are you able to run a single instance of your application with all the instruments at full update rate? How many instruments are you subscribing to, and how frequently do they update?

The loss of connection points to either network issues or a slow consumer - i.e., the EMA application does not process the updates fast enough and gets dropped by the market data server. The solution is to keep data processing in the event callback loop to a minimum.

Can you try these suggestions and see if they help?


Hello Gurpreet,

and thank you for your response and suggestions.

Here are our findings:

- Context: we were subscribing to approximately 10K instruments.

- We store price snapshots and import them into our application every 10 minutes (incoming data usually arrives at around 3 updates/second, far more frequently than the rate at which we actually ingest it). Is there a way to specify a behavioral policy here? For example, it would be great if we could explicitly request that the latest price snapshots for the subscribed instruments be sent, rather than reacting to every potential update via the consumer client's virtual methods, as we currently do.

- We have further reduced the work done in the respective handlers (the virtual `onUpdate` and `onRefresh` message handling methods). This, coupled with the horizontal scaling strategy, has indeed helped.

So, in all, we are bound by the rate at which we can import the data we have acquired up to a certain point.

If there is no recommendation or possibility to change the API policy (as I suggested above), then I would simply close the question for now and draw the following conclusion: **it is paramount that the time spent in the consumer client callbacks be reduced to an absolute minimum and subsequent data processing be deferred/transferred to a separate worker thread (carefully managing contention)**.

Hi @trca

Indeed, your conclusions seem about right - you are consuming real-time streaming data, and the API is an event-driven one.

There is an option to periodically snap data (see example Consumer 102) rather than receive streaming data - but I am not sure whether that would meet your requirements in terms of timeliness - please test to explore.

Also, I just wanted to check whether you are using Views (field filtering) as demonstrated in the Consumer360 example. If you are only interested in a small subset of fields, using Views may reduce the number of Updates you receive - i.e., you would not receive updates that do not contain the fields you are interested in.

One final point: I assume you are consuming data from our cloud-based RTO feed. If the application you are developing were to consume data from one of our deployed servers (e.g. an RTC or ADS), there would be the option for the server to be configured to provide a conflated feed - e.g. one update every second or, say, every 100 ms.


Hi @trca,

Yes, the underlying issue is too much time spent in the event processing loop. In the Real-Time SDK, there is no means to skip this event/async behavior - just work on making the loop faster, and defer any heavy processing to other threads in the application. You can also use the View feature to limit the amount of data received.

You didn't specify where you are getting data from. If this is from your in-house market data system, your administrator might be able to configure a conflated service which reduces the amount of data your application receives. Modern RTMDS components also have a REST interface for requesting snapshot data.


Hello Gurpreet, we are using a test RTMDS connection, subscribing to the ELEKTRON_DD universe and requesting approximately 9800 instruments (mostly BID and ASK prices). We are indeed requesting a view, so we do not receive all data for each instrument.

We've streamlined the application and were able to avoid the issue.
We'll consider the REST interface for a future implementation, but so far we have been able to implement a stable price-snapshot capturing strategy without needing to employ further API features.
Thank you so much for your kind feedback.
