For a deeper look into our Eikon Data API, see:

Overview |  Quickstart |  Documentation |  Downloads |  Tutorials |  Articles


Pricing stream instrument limit? 5 RICs work, 2300 don't...?

Hi,

Since I typed my question twice already and your website deleted it both times, I won't be doing it again. Here is my original question:

Hi, I have a quick question about the API. I am trying to stream quotes for multiple instruments across multiple exchanges, but I receive an error. I was wondering if there is a request limit for these kinds of requests (Python: refinitiv.data.open_pricing_stream(universe=allstu, fields=['BID', 'ASK', 'ASKSIZE', 'BIDSIZE'])) - in this case the list allstu includes 2,308 instruments. What are the limits on these requests? Thanks for your time!

I hope someone can help me with my question. I won't submit the code either, since this developer website interpreted an uploaded Python file as a hacking attempt ;-D

Thanks for your help so far!

Case : 11288431 Eikon API (Python) error - from Refinitiv [ ref:_00D30602X._5008Z1tu0E4:ref ]

@zoya.faberov

@kenley.macandog123

@nick.zincone

Tags: python, python api, streaming-prices, snapshot-pricing, pricing-data


Hi @tom.derwort

Based on what I understand of your requirements, I am not sure the Eikon API is the right choice - Eikon is a desktop product, and the Eikon API is designed for desktop-type usage scenarios.

A requirement to snap 2.5k instruments at 5-second intervals is more typically fulfilled using one of our enterprise streaming services, such as Refinitiv Real-Time via a deployed ADS, or the cloud-based Refinitiv Real-Time Optimized (RTO) service - with the application written using a streaming API such as the RTSDK or the WebSocket API.

I just did a few quick test runs using a Python WebSocket API script to snap 3k instruments from RTO (the cloud-based service), and the time taken varied between 2.4 s and 5 s on subsequent runs.

If your organisation has a deployed ADS and/or you can code in C++ or Java, the snap time could be reduced a bit further.


Oh okay, I will think about that option and see if we have such access, but I think I'll just try to reduce the load then. I'm not very experienced in coding or using these kinds of APIs, but what is the difference from the Excel add-in, which, as I described before, does pull 20k real-time requests every 5 seconds? And why can't the standard API connection handle that?

(Thank you for answering, though, and for being so patient with my beginner's knowledge.)

@tom.derwort

What is the Excel add-in function that you used to pull the data?

The source of data may be different.

Hi, so I'm basically using the following function twice:

=TR(C9:C3135;"BIDSIZE;BID;ASK;ASKSIZE";"UPDFRQ="&D3&"S CH=Fd RH=IN";K8)

The area C9:C3135 obviously contains all the instruments. Thanks for revisiting this - I would still be very interested in the difference between the API and the Excel version!

Best,



Hello @tom.derwort ,

Sorry to hear that you are facing these issues.

Am I correct in understanding that the small excerpt of code with 5 items requested - the same request, as suggested by @nick.zincone - works as expected on your side?

If yes:

There is a known issue with large instrument lists, which may or may not be related to what you observe, that the product team is aware of and is looking into. If this is the case, and you would like to implement the requirement now, you should be able to process the complete list by partitioning/batching your instrument list. You can review https://github.com/Refinitiv-API-Samples/Article.RDLibrary.Python.2KUseCase/blob/main/Pricing-StreamingEvents-2K.ipynb and re-use the code.
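As a rough sketch of that batching approach (the helper and the names `all_rics` and `chunk_rics` below are illustrative, not part of the library; the usage comment assumes an open session with `rd` as `refinitiv.data`):

```python
# Illustrative helper: split a large RIC list into server-friendly batches.
def chunk_rics(rics, size=2000):
    """Return successive slices of at most `size` instruments."""
    return [rics[i:i + size] for i in range(0, len(rics), size)]

# Usage sketch (assumes `rd` is refinitiv.data and `all_rics` is the full list):
# for batch in chunk_rics(all_rics):
#     stream = rd.open_pricing_stream(
#         universe=batch,
#         fields=['BID', 'ASK', 'ASKSIZE', 'BIDSIZE'])
```

Each batch then stays under the per-stream item limits while the full list is still covered.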

If this is not the case:

Please include your version of the RD library, the small excerpt of code only (either paste the excerpt, or zip and attach it) so we can verify the issue, and your results/errors from the small excerpt of code as suggested by @nick.zincone.

Let us know how this works on your side.



Awesome, thank you! I will try the first option, since everything worked perfectly with 5 RICs on my end too. I'll try your suggested fix with the chunks, but did I understand correctly that this might be a Refinitiv (or stream-function) issue that you are trying to fix? (That would be good to know for the longer term, since what I am trying to do would probably work more efficiently through one whole stream, in case that works at some point in the future.) Thank you for your awesome and quick help so far, and I'll get back to you once I have tested it!

Hi, sadly it is still not working. The stream doesn't open, with or without batching.

It always seems to stop between a count of 750 and 760 and shows an error about a dictionary changing size during iteration. I double-checked the RICs and tried two different sets, but it doesn't work. I attached a PDF of what I did and my library versions, so it shows the exact error (second-to-last page), and I think you should still have the Excel file the instruments are coming from. Maybe you could try and check again - it seemed to run in your example, which confuses me a little. I hope you can figure it out.
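For what it's worth, "dictionary changed size during iteration" is a generic Python RuntimeError raised when a dict is mutated while being iterated over - here most likely inside the library rather than in your own code. A minimal illustration of the error and the usual fix (the `quotes` dict and its keys are made up for the example):

```python
quotes = {'EUR=': 1.08, 'GBP=': 1.27}

# Mutating a dict while iterating over it raises RuntimeError:
# for ric in quotes:
#     quotes[ric + 'X'] = quotes[ric]   # RuntimeError: dictionary changed size during iteration

# Safe pattern: iterate over a snapshot of the keys instead.
for ric in list(quotes):
    quotes[ric + 'X'] = quotes[ric]
```

That it appears at a fixed item count points to a library-internal bug rather than bad RICs.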

@zoya.faberov

@nick.zincone


stream-open-error.zip (255.2 KiB)

@tom.derwort

Downgrading to refinitiv-data==1.0.0b9 may help.

Awesome, that fixed it! Now at least the stream finishes. More than half of the values are still N/A, but at least I get a real error code for "too many items" now.

Thank you!



Hi @tom.derwort

The message

A20: Aggregate item limit exceeded

is coming from the server from which the streaming data is being sourced. It is not an API restriction.

If you are connecting to a deployed ADS at your organisation, then please speak to your local Market Data team to see if they can increase the Aggregate limit for your user.

The aggregateItemLimit parameter set on the ADS specifies the maximum number of items which the specified user is allowed to have open simultaneously across all services.

If, however, you are connecting to our Refinitiv Real-Time Optimised Cloud-based service, then I recommend you speak to your Refinitiv account team to see if this figure can be increased. I have certainly tested with consuming 3k-4k open instruments from RTO.



Hi, so if I close all other requests in Excel and in Eikon itself, could I increase the number of requests I can pull through Python/the API, or did I get that wrong?

Hi @tom.derwort

Given how the aggregateItemLimit is applied, then yes - closing other open requests should allow you to increase the number of items.

However, I have been advised by the Eikon API team that there is a limit of 2,500 open items - and the default config value for aggregateItemLimit is 2,400. That said, many organisations set a higher number.

To be clear, are you interested in ticking data - i.e. receiving updates - or just snapping the latest values every 5 seconds? If you are only interested in snapping, then you can batch your request to snap more than 2,500 items.

Oh okay, so I am interested in a real-time snapshot of the data every 5 seconds, so no need for tick data. I already thought about batching, but my problem was that within the loop over the 2.5k-instrument chunks I would have to open a stream, get the data, close it, and then do the same for the next chunk (or use the normal get_data). Trying both options (in Jupyter), I noticed that getting the data either way already takes longer than 5 seconds, which means the whole loop would take far longer. Is that right, or does it maybe just take that long because of Jupyter or something else?

Hello @tom.derwort ,

If you are working with a DesktopSession, connecting to the local Workspace/Eikon desktop, and looking to stream 10k instruments, you may wish to approach the requirement by chunking/batching into batches of 2k. If you are running in a Jupyter notebook/lab, you may additionally try to minimize the level of output, to be able to easily track the batches of instruments as they complete (see the example link in the previous answer).

How many instruments are you looking to stream? Larger list requirements may not be best suited to the RD library Python implementation via a DesktopSession, which interacts with the Eikon desktop via the Eikon proxy.

I fully agree with @umer.nalla: API access will support requests up to the server-side limit permissioned for the connecting user.


Hi, thank you for your answer. The problem is that if I chunk the data and separately open a stream, request the data, and then close the stream to move on to the next chunk, it takes too much time. It seems I can only open a stream for 10k data points, right? I totally understand the restrictions and why they are in place; I'm just confused why I can do the same thing in Excel for around 20k data points (2,500 instruments × 8 fields (BID, ASK, ...)). I was looking to retrieve around 40k data points at an interval of 5 seconds. So is that possible in my current setup, or will I need to subscribe to some other option to do that? My original thought was to reduce the load on the Excel application and increase the covered data, but with a limit of 10k I would effectively be reducing the coverage, if you know what I mean...?


Thanks for all the help so far :-D

Hi @tom.derwort ,

You are absolutely correct: 40k snapshots every 5 seconds will be too high for this type of integration. As suggested by @umer.nalla, looking into RTDS and RTO would be more appropriate for this type of requirement, and, even in the enterprise product space, it may be worthwhile to consider a streaming subscription, applying the deltas and deriving a snapshot every 5 seconds, rather than requesting one every 5 seconds.
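The delta-plus-derived-snapshot idea could be sketched roughly like this (pure Python with hypothetical names; the real update-callback signature depends on the streaming API you use):

```python
import copy
import threading

# Hypothetical local cache: streaming update callbacks mutate it, and a
# timer reads it every 5 s - no server round-trip per snapshot.
_cache = {}
_lock = threading.Lock()

def on_update(ric, fields):
    """Apply a delta update from the stream to the local cache."""
    with _lock:
        _cache.setdefault(ric, {}).update(fields)

def snapshot():
    """Return a consistent copy of the latest cached values."""
    with _lock:
        return copy.deepcopy(_cache)
```

The stream carries only changes, while the 5-second "snapshot" becomes a cheap local read instead of a fresh 40k-point request.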

I have timed 10k chunked snapshots using RD and a Desktop session via a Python notebook (Pricing-Snapshots-2K.ipynb, extended to 10k) just to see what the ballpark time would be, and it is not even close to completing in under 5 seconds: in my environment it is around 45 seconds. A plain .py script should be faster than a notebook, but not that much faster.

Hope this info helps