Requests slower than expected and request handling sequence

A couple of questions on behalf of a DSS prospect:
1. Our developers have been doing some performance testing of DSS REST API TimeSeries requests, which has prompted a few questions that you may be able to help with. We're finding this type of request slower than expected.
- A test using all of the securities within the Henderson 2 year test data (roughly 11000 instruments with RICs available), requesting data for a period of 7 days, took 20 minutes to return everything.
- A second test, using test code from the Thomson Reuters developer community forum, produces this result:
This prompts our first question: is there any throttling or other performance limit on our test account that might explain the slower than expected speed, or is this typical? Is there anything we might try that could speed this up?
Secondly, when reviewing the documentation we found the following:
Could you help clarify this for us: does this mean that requests of the same type will be processed synchronously on the server, even if the requests themselves are asynchronous?
Following on from both of these is a more general question about requests: what is the most efficient way to load timeseries data for a big list of IDs over a date range of a few years?
Best Answer
To your first query:
Performance depends on the server load, which varies during the day, making it very difficult to give numbers. That said, those you mention do not seem unreasonable.
Note: once the data has been delivered, the next get on the location URL will deliver a 404; that is normal, because the data was delivered in the previous call.
To your second query, on:
Multiple requests for the same data type, like tick data, will be placed in one queue, whereas requests for tick data and Intraday bars use independent queues.
This means that requests of the same type are all placed in the same queue. If that queue has a lot of pending requests, processing might take more time than a request for a different type of data that will be placed in a different queue (which could have very few pending requests and therefore process faster).
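The queueing behaviour described above can be illustrated with a small sketch (a pure illustration of the concept, not the actual server implementation; the data type names are taken from the quote):

```python
from collections import deque

# One queue per data type: requests for the same type share a queue,
# while requests for different types are processed independently.
queues = {}

def enqueue(request_id, data_type):
    """Place a request in the queue for its data type."""
    queues.setdefault(data_type, deque()).append(request_id)

# Three tick data requests and one intraday bars request:
enqueue("req-1", "TickData")
enqueue("req-2", "TickData")
enqueue("req-3", "TickData")
enqueue("req-4", "IntradayBars")

# All tick data requests wait behind each other in one queue,
# while the intraday bars request has a near-empty queue to itself.
print(len(queues["TickData"]))      # 3
print(len(queues["IntradayBars"]))  # 1
```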
For that reason, if you have to submit requests for several data types, instead of submitting one, waiting for the results, and then submitting the next, it is worthwhile submitting both and checking for results as they become available.
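That concurrent-submission pattern can be sketched as follows. The `submit_extraction` helper is a hypothetical stand-in: in real code it would POST the extraction request to DSS and poll the monitor (location) URL until the result is ready.

```python
import concurrent.futures
import time

def submit_extraction(data_type):
    # Hypothetical stand-in for POSTing a DSS extraction request and
    # polling its location URL until the data is delivered.
    time.sleep(0.01)  # simulate server-side queueing and processing
    return f"{data_type} result"

# Submit both request types up front, then collect each result as it
# becomes available, instead of waiting for one before starting the other.
with concurrent.futures.ThreadPoolExecutor() as pool:
    futures = {pool.submit(submit_extraction, t): t
               for t in ("TickData", "IntradayBars")}
    results = {}
    for fut in concurrent.futures.as_completed(futures):
        results[futures[fut]] = fut.result()

print(results["TickData"])  # "TickData result"
```

Because the two data types sit in independent server-side queues, neither request blocks the other, so total wall-clock time approaches that of the slower queue rather than the sum of both.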
To your general question:
For efficiency, it is best to:
- Avoid (if possible) placing requests at high server load times, like the hours just after big markets close. Weekends are best.
- Make a few requests with many instruments each (within the allowed limits), instead of many small requests.
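The second point can be sketched as a simple chunking helper (the batch size of 5000 is illustrative; check the actual instrument limits for your extraction type):

```python
def chunk(instruments, batch_size):
    """Split a large instrument list into a few large batches."""
    return [instruments[i:i + batch_size]
            for i in range(0, len(instruments), batch_size)]

# e.g. the ~11000 RICs mentioned above, in batches of 5000:
# 3 requests instead of thousands of single-instrument ones.
rics = [f"RIC{i}" for i in range(11000)]
batches = chunk(rics, 5000)
print(len(batches))      # 3
print(len(batches[-1]))  # 1000
```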
I'm trying to understand your use case. Will you need to make such large requests on a regular basis, maybe even daily, or is this a one-off to fill a database?
Answers
Hi Christiaan,
I'm using the Dex2 COM API for VBA to get historical performance data (20 fields) for about 10k instruments. It also takes me from about 20 minutes up to more than an hour, which I think is a very long time!
I also noticed different running times based on the time of day - could you be a bit more specific about the peaks?
Second question about efficiency: I'm sending requests in batches of 500 instruments for 20 TR. fields - is that optimal, or should I rather ask for batches of 2000 instruments but only 5 fields?
Third question about efficiency: how slow is the AVAIL function? Does dividing the assets into categories (Bonds, Equities, Funds) make sense in terms of efficiency, so that I can tailor the query with the asset-category-specific fields instead of using AVAIL?
Many thanks!
@nicola.pestalozzi, your query seems to be about the Eikon Data API, whereas this space of the forum (and the thread above) is for the DataScope Select API, a different product. Please post this query in the Eikon Data APIs forum space.