refinitiv-dataplatform Streaming fails

Hello,

We use refinitiv-dataplatform=1.0.0a7 to get real-time prices. From time to time I see an error in my logs:

2021-03-10 23:37:21,537 - Session session.platform - Thread 140598623328000 | WebSocket 0 - OMM Protocol - pricing
WebSocket error occurred for web socket client 1 (login id 570) : Connection is already closed.
Exception in callback OMMStream._on_reconnect(<FailoverStat...verStarted: 0>, 'Open', 'Suspect', 'FailoverStarted', 'Streaming co...g to recover.')()
handle: <Handle OMMStream._on_reconnect(<FailoverStat...verStarted: 0>, 'Open', 'Suspect', 'FailoverStarted', 'Streaming co...g to recover.')()>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/nest_asyncio.py", line 150, in run
    ctx.run(self._callback, *self._args)
  File "/usr/local/lib/python3.8/site-packages/refinitiv/dataplatform/delivery/stream/omm_stream.py", line 237, in _on_reconnect
    self._on_status(status_message)
  File "/usr/local/lib/python3.8/site-packages/refinitiv/dataplatform/delivery/stream/omm_item_stream.py", line 334, in _on_status
    super()._on_status(status)
  File "/usr/local/lib/python3.8/site-packages/refinitiv/dataplatform/delivery/stream/omm_stream.py", line 284, in _on_status
    if new_status_code == 'Open':
NameError: name 'new_status_code' is not defined

Why does this happen, and how can we fix it?
OS: Debian GNU/Linux 10 (buster)

Tags: rdp-api, refinitiv-data-platform, refinitiv-data-platform-libraries


@Alena.Melnikova

I found two issues here.

1. The connection is closed (WebSocket error occurred for web socket client 1 (login id 570) : Connection is already closed)

2. NameError: name 'new_status_code' is not defined

For the second issue, it should be resolved in 1.0.0a7.post7.


How to fix the first issue?

@Alena.Melnikova

For the first issue, it is better to contact the server team (Refinitiv Real-Time - Optimized) via MyRefinitiv to verify the reason for the disconnection at that time.

After knowing the reason, we can find a way to fix it.


Hi @Alena.Melnikova

The error message indicates that the server you are connecting to has closed the connection.

Can you please provide more information to help with understanding the issue?

Are you connecting to an ADS (deployed) or to Real-Time Optimized (Cloud)?

How many instruments are you subscribing to?

What kind of processing do you do with the update payload as it is received in the callback handlers, e.g. complex calculations, writing to a database, etc.?



Also, if the problem can be recreated quite easily, please try creating a separate Python env, installing 1.0.0a7.post7, and retesting - as 1.0.0a7 is quite old.

pip install refinitiv-dataplatform==1.0.0a7.post7

I do not recommend installing the above in your main Python env, as it is not a 100% stable release.

It is not easy to reproduce. Usually it works fine for many hours.
I tried 1.0.0a7.post7 but got some issues (I don't remember which exactly) and switched back to 1.0.0a7.


Hi @umer.nalla
We are getting D5 prices for 16 currencies (e.g. EUR=D5, JPY=D5). I guess it is Real-Time Optimized, though I am not sure.
We don't do anything special; we just send the events to Kafka:

def streaming_prices():
    streaming_prices = rdp.StreamingPrices(
        universe=RIC_CCY_D,
        on_update=lambda streaming_price, instrument_name, fields:
        display_updated_fields(streaming_price, instrument_name, fields)
    )
    streaming_prices.open()
    while True:
        try:
            asyncio.get_event_loop().run_until_complete(asyncio.sleep(1))
        except (KeyboardInterrupt, SystemExit):
            rdp.close_session()
            break

def display_updated_fields(streaming_price, instrument_name, fields):
    fx_timestamp = datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc).isoformat().replace("+00:00", "Z")
    fields['fx_timestamp'] = fx_timestamp
    fields['ric'] = instrument_name
    message = json.dumps(fields)
    producer.produce(topic=my_topic, value=message, callback=acked)
    producer.poll()
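(Aside: the fx_timestamp expression above can be written a little more directly. A behavior-equivalent sketch for a UTC, "Z"-suffixed ISO timestamp:)

```python
import datetime

# Equivalent to utcnow().replace(tzinfo=utc): an aware "now" already in UTC
ts = datetime.datetime.now(datetime.timezone.utc)
fx_timestamp = ts.isoformat().replace("+00:00", "Z")
```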

We can get a few dozen error messages within a single second, and the next second it works again as usual.


screen-log.png (63.2 KiB)

To confirm if you are using ADS or Cloud connection, please confirm the names of parameters you pass in when creating your Platform Session - do you pass in a Grant with Username and Password - or deployed_platform_host and deployed_platform_username?

Do not post the username / password etc here - just the name of parameters you are passing.

Also, I am not familiar with Kafka - so can you confirm how long the producer() calls hold onto the thread before returning control back to the display_updated_fields() method?

A common reason for a server disconnect is that the RDP Library cannot respond to PING messages sent by the server in a timely manner, or cannot read all the update events from the buffer quickly enough. I would be surprised if this were an issue for a watchlist of only 16 RICs - unless the callback function really is taking too long to process the data, so that update events buffer up until the buffer overflows and the server disconnects you.
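One common mitigation for a slow callback is to make it do nothing but enqueue the update, and drain the queue on a separate worker thread so the library's event-loop thread is never blocked by serialization or publishing. A minimal stdlib-only sketch (the callback signature mirrors the one in the code above; `publish` stands in for your Kafka producer call, which is an assumption, not the library's API):

```python
import json
import queue
import threading

updates = queue.Queue()  # unbounded; consider a maxsize + drop policy in production

def on_update(streaming_price, instrument_name, fields):
    # Runs on the library's event-loop thread: just enqueue and return immediately
    updates.put((instrument_name, dict(fields)))

def drain(publish):
    # Background consumer: serialize and publish without blocking the callback
    while True:
        item = updates.get()
        if item is None:          # sentinel tells the worker to stop
            break
        ric, fields = item
        fields["ric"] = ric
        publish(json.dumps(fields))

sent = []
worker = threading.Thread(target=drain, args=(sent.append,), daemon=True)
worker.start()

on_update(None, "EUR=D5", {"BID": 1.1})  # simulate one update from the library
updates.put(None)                         # stop the worker
worker.join()
```

With this shape, the time spent inside the callback is just a queue put, so PING responses and buffered updates are handled promptly even if publishing occasionally stalls.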


We use rdp.GrantPassword(username, password).
The Kafka producer usually takes a few milliseconds, though each "connection closed" burst lasts dozens of milliseconds. I am not sure whether the two are related.

Hi @Alena.Melnikova

As you are connecting to the cloud, there could be issues with the RRTO Service - but if this is happening consistently, then this is unlikely.

I will reach out to the RDP Library dev team to see if they provide further guidance.
