
RTO channel down when a large number of RICs is subscribed

Hi

My customer is experiencing channel-down events when they subscribe to around 5000 RICs in one application.

Programming language: Java

RTSDK Java edition (EMA) is being used.

We are using the sample code (ex450_MP_QueryServiceDiscovery.Consumer).

The channel down cannot be replicated on our desktop, so we assume the client's network or PC may have some kind of limitation.

However, the rate of disconnection goes down when the number of RICs is reduced.

So we compared the disconnection rates between the two cases below.

Case 1: 1 app (1 session) with 5000 RICs -> channel down occurred every 10-15 minutes during market hours.

Case 2: Multiple apps, each with fewer than 1000 RICs (the total number of RICs is the same as Case 1) -> channel down still happened, but at a significantly lower rate than in Case 1.


In both test cases the same 5000 RICs in total are subscribed, so the network load and machine CPU usage should also be the same, but we are wondering why Case 2 has a much lower disconnection rate than Case 1.


I would like to ask whether there is any parameter in the Java RTSDK that can be tuned for subscribing to a large number of RICs. We wish to reduce the rate of channel down even when 5000 RICs are subscribed in one application.
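
For reference, the items are registered one stream per RIC, roughly like the simplified sketch below (RTSDK Java 2.x package names; the service name, RIC list and client class are placeholders, not the customer's actual code):

// Simplified sketch: register each of the ~5000 RICs as its own item stream.
// Classes come from com.refinitiv.ema.access; "ELEKTRON_DD", ricList and
// appClient are placeholders for illustration only.
OmmConsumer consumer = EmaFactory.createOmmConsumer(config);   // config built as in ex450
for (String ric : ricList) {
    consumer.registerClient(
        EmaFactory.createReqMsg().serviceName("ELEKTRON_DD").name(ric),
        appClient);                                            // appClient implements OmmConsumerClient
}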



elektron#technology

Hi @noboru.maruyama01 ,

Thank you for your participation in the forum. Is the reply below satisfactory in resolving your query?
If so please can you click the 'Accept' text next to the appropriate reply? This will guide all community members who have a similar question.

Thanks,
AHS

@noboru.maruyama01

Hi,

Please be informed that a reply has been verified as correct in answering the question, and marked as such.

Thanks,

AHS

Accepted

They are getting disconnects in both cases, so it is hard to say that one setup works and the other does not.

When multiple applications connect to RTO, they are most likely serviced by different physical endpoints, so they are all likely getting data from different servers (even when the service-discovery endpoint is the same). Some of these servers might be less tolerant of how much data backlog they keep before kicking the application off.

The client should fix the underlying problem of high latency and low bandwidth. You can also recommend that they move their application to the cloud, closer to the source of the data.




Hi @noboru.maruyama01,

RTSDK Java should easily handle a batch of 5K instruments. If you are getting a channel down event, it points to a network bandwidth issue.

What is the channel type and protocol that the client is using? They can use the view feature to reduce the number of FIDs they receive in each update message, which should reduce the bandwidth usage.
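
As a rough illustration, adapted from the ex360_MP_View example (the service name, item name and FIDs 22/25 are only placeholders), a view request in EMA Java looks roughly like this:

// Request only BID (22) and ASK (25) instead of the full set of fields.
// EmaRdm is from com.refinitiv.ema.rdm; other classes from com.refinitiv.ema.access.
OmmArray fids = EmaFactory.createOmmArray();
fids.fixedWidth(2);                                             // FIDs encoded as 2-byte integers
fids.add(EmaFactory.createOmmArrayEntry().intValue(22));        // BID
fids.add(EmaFactory.createOmmArrayEntry().intValue(25));        // ASK

ElementList view = EmaFactory.createElementList();
view.add(EmaFactory.createElementEntry().uintValue(EmaRdm.ENAME_VIEW_TYPE, 1));  // 1 = field ID list
view.add(EmaFactory.createElementEntry().array(EmaRdm.ENAME_VIEW_DATA, fids));

consumer.registerClient(
    EmaFactory.createReqMsg().serviceName("ELEKTRON_DD").name("IBM.N").payload(view),
    appClient);

With around 5000 items, trimming each update down to only the FIDs the application actually uses can reduce the bandwidth per update considerably.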



Hi @Gurpreet

Many thanks for your comment. Yes, I also believe 5K RICs should be something the SDK can handle.

The question here is why the customer sees different outcomes between the two cases: (a) 5K RICs x 1 session and (b) 1K RICs x 5 sessions (actually using 5 different Machine IDs). From a network point of view, both cases should receive the same volume of data, so network bandwidth may not be the simple bottleneck?

Please note that I also asked the customer to consider using the VIEW feature, and it is under discussion among the customer's developers.

Thank you
