Cluster/Redundancy/Failover for a non-interactive provider

Hi, let's assume I have to publish an update for a RIC via a non-interactive provider every X seconds. How to implement the publishing itself is more or less clear from the docs and code samples. What is not clear is how to provide any sort of cluster/redundancy/failover for that non-interactive provider.

Can I start two or more identical non-interactive providers in an active-active cluster so that all of them publish updates every X seconds? How would a consumer see the updates? Does the Refinitiv platform provide any de-duplication mechanism, or will consumers see all updates from all the cluster nodes?

Or should my non-interactive provider run in an active-standby cluster, so that if the main node crashes the standby is promoted to the active role?

What is the best/recommended practice for providing cluster/redundancy/failover in a non-interactive provider implementation?

Tags: trep, technology, product, non-interactive-provider


Hi @nariman.ab

Not an expert on the ADH side of things, but the reason I pointed you down that route is that utilizing the ADH features makes the developer's life much simpler - you don't have to worry about syncing up your NIP instances and implementing failover/redundancy etc. at the application level.

If HotStandby is enabled, you need to run two (or more) instances of your NIP publishing exactly the same data. The ADH can then take care of the situation where one of your NIP instances fails, loses connectivity, etc.
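The arbitration itself happens inside the ADH, not in your application, so consumers connected through the ADH see a single update stream rather than duplicates. As a rough illustration of that behaviour (a toy model only - the class, method names, and the way `temporalDifference` is applied here are all hypothetical, not ADH code):

```java
// Toy model of ADH hot standby arbitration between two identical NIP
// update streams. The ADH forwards updates from the active source only
// and drops the duplicate copies from the standby; if the active source
// goes silent for longer than temporalDifference, it fails over.
class HotStandbySelector {
    private final long temporalDifferenceMillis;
    private int activeSource;           // 0 = primary NIP, 1 = standby NIP
    private long lastActiveUpdateTime;  // last publish time of the active source

    HotStandbySelector(long temporalDifferenceMillis) {
        this.temporalDifferenceMillis = temporalDifferenceMillis;
        this.activeSource = 0;
        this.lastActiveUpdateTime = 0;
    }

    int activeSource() { return activeSource; }

    // Called for every update received from either source. Returns true
    // if the update is forwarded to consumers (it came from the active
    // source), false if it is the standby's duplicate and is dropped.
    boolean onUpdate(int source, long timeMillis) {
        if (source != activeSource
                && timeMillis - lastActiveUpdateTime > temporalDifferenceMillis) {
            activeSource = source; // active source went silent: fail over
        }
        if (source == activeSource) {
            lastActiveUpdateTime = timeMillis;
            return true;
        }
        return false;
    }
}
```

In the toy model, with a 1-second window, duplicates from the standby are suppressed while the primary keeps publishing; once the primary falls silent beyond the window, the next standby update triggers failover and flows through. This is why both NIP instances must publish identical data: consumers should not be able to tell which source they are receiving.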

You have to be quite confident that both NIP instances will publish exactly the same updates with as small a temporal difference as possible. This time window can be tweaked on the ADH by your Market Data team, e.g.

*serviceName*hotStandby*temporalDifference : time_in_seconds

Sets the maximum difference, in seconds, between the update streams of the two hot standby source applications/datafeeds. Setting this value too high may result in a delayed failover, while setting it too low may prevent the standby server from providing updates seamlessly.
Default Value: 1
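As a concrete illustration, widening the window to 2 seconds for a service would look like the following in the ADH configuration (the service name NI_PUB here is just a placeholder - substitute your own):

```
*NI_PUB*hotStandby*temporalDifference : 2
```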



Hi @nariman.ab

Does your organization have multiple ADH servers that you can connect individual instances of NIPs to?
If you do have more than one, then the easiest (from a development perspective) and most reliable route would be to have two (or more) instances of your NIP, each connecting to a separate ADH server configured to support HotStandby for your NIPs. Each of your NIPs would be expected to publish identical data to its own ADH instance.
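If your NIP is built with EMA, pointing each instance at its own ADH server can be done purely through configuration. A sketch of the relevant EmaConfig.xml entries, assuming a hypothetical hostname adh1.example.com and the default RSSL port 14003 - the second instance would use an identical configuration pointing at the second ADH:

```xml
<NiProviderGroup>
    <NiProviderList>
        <NiProvider>
            <Name value="NiProvider_A"/>
            <Channel value="Channel_ADH1"/>
        </NiProvider>
    </NiProviderList>
</NiProviderGroup>
<ChannelGroup>
    <ChannelList>
        <Channel>
            <Name value="Channel_ADH1"/>
            <ChannelType value="ChannelType::RSSL_SOCKET"/>
            <Host value="adh1.example.com"/>
            <Port value="14003"/>
        </Channel>
    </ChannelList>
</ChannelGroup>
```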

There are additional requirements mandated by the ADH for Hot Standby to work. Your Market Data team should be able to advise you on these; if they are unsure, they can refer to the ADH System Admin Guide, section 'Hot Standby Requirements for Source Application'.




Hi @umer.nalla, thank you for the answer. However, I am more concerned about the application implementing the NIP. Let's assume the scenario you mentioned (two ADH servers, one active, one in standby mode). That means I have two applications (with identical NIP implementations), and these applications must always be "up". If one of them goes down, I could have service downtime.

To sum up: at any given point in time only one NIP can send data to an active ADH server, otherwise consumers will see data duplicates (identical events from 2 or more identical NIP applications), correct?
