How to get historical ref data using Python TRTH for a small universe?

Hi, I'm following the TRTH_OnDemand_IntradayBars Python example (available under https://developers.refinitiv.com/thomson-reuters-tick-history-trth/thomson-reuters-tick-history-trth-rest-api/downloads), adapted to send a HistoricalReferenceExtractionRequest instead of a TickHistoryIntradaySummariesExtractionRequest.
My template looks something like this:
{
    "ExtractionRequest": {
        "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.HistoricalReferenceExtractionRequest",
        "ContentFieldNames": [
            "RIC",
            "Exchange Code",
            "Security Description",
            "Currency Code",
            "Expiration Date",
            "RIC Root",
            "Trading Status",
            "Underlying RIC",
            "Put Call Flag",
            "Start date",
            "Thomson Reuters Classification Scheme"
        ],
        "IdentifierList": {
            "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                { "Identifier": "0#NIY:", "IdentifierType": "ChainRIC" },
                { "Identifier": "0#RTY:", "IdentifierType": "ChainRIC" }
            ],
            "ValidationOptions": { "AllowHistoricalInstruments": true },
            "UseUserPreferencesForValidationOptions": false
        },
        "Condition": {
            "StartDate": "__QUERY_START_TIME_",
            "EndDate": "__QUERY_END_TIME_"
        }
    }
}
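For reference, this is roughly how step 2 of the example posts the template (a sketch, not the exact pybook code; I'm assuming "token" was obtained in the authentication step and "requestBody" holds the template above with the placeholders filled in):

import requests

# Sketch of step 2 of the example: "token" comes from step 1 (authentication),
# "requestBody" is the JSON template above with the date placeholders replaced.
requestUrl = 'https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractRaw'
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}
r2 = requests.post(requestUrl, data=requestBody, headers=requestHeaders)
status_code = r2.status_code
print ('HTTP status of the response: ' + str(status_code))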
Everything works fine, except when requesting a small universe (like in the template above).
The example assumes there is always at least one 202 status code (extraction not finished, must wait) before the 200 (extraction completed), but with a small universe this is not always the case: the extraction can complete immediately. As a result, the code fails at step #3 because the "r3" variable is never defined:
#As long as the status of the request is 202, the extraction is not finished;
#we must wait, and poll the status until it is no longer 202:
while (status_code == 202):
    print ('As we received a 202, we wait 30 seconds, then poll again (until we receive a 200)')
    time.sleep(30)
    r3 = requests.get(requestUrl, headers=requestHeaders)
    status_code = r3.status_code
    print ('HTTP status of the response: ' + str(status_code))

#When the status of the request is 200 the extraction is complete;
#we retrieve and display the jobId and the extraction notes (it is recommended to analyse their content):
if status_code == 200:
    r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
    jobId = r3Json["JobId"]
    print ('\njobId: ' + jobId + '\n')
    notes = r3Json["Notes"]
    print ('Extraction notes:\n' + notes[0])
I have not been able to find a workaround for this (other than requesting more instruments than I need, which is definitely not ideal). I tried pre-defining "r3", but had no luck getting the JobId.
Appreciate your help!
Thanks,
Luz
Best Answer
Hello @lcontreras,
For me, the simplest approach would be to modify Step 2 to call:
requestUrl='https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractWithNotes'
instead of
requestUrl='https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractRaw'
because, per the REST API Reference: "If the results are ready before the request times out, it will return the result. If not, it will return a 202 response for the client to try to get the results from the ExtractResults method."
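In the pybook that means step 2 becomes (a sketch; "requestBody" and "requestHeaders" are unchanged from the original example):

# Same POST as before, only the endpoint changes to ExtractWithNotes:
requestUrl = 'https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractWithNotes'
r2 = requests.post(requestUrl, data=requestBody, headers=requestHeaders)
status_code = r2.status_code
print ('HTTP status of the response: ' + str(status_code))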
Next, in Step 3, you can check for the result:
if status_code == 200:
    jsonResponse = json.loads(r2.content)
    print(jsonResponse)

#If status is 202, display the location URL we received, which we will use to poll the status of the extraction request:
elif status_code == 202:
    requestUrl = r2.headers["location"]
    print ('Extraction is not complete, we shall poll the location URL:')
    print (str(requestUrl))
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "application/json",
        "Authorization": "token " + token
    }
    #As long as the status of the request is 202, the extraction is not finished;
    #we must wait, and poll the status until it is no longer 202:
    while (status_code == 202):
        ...

Hope this approach helps. Just in case, I am attaching my modified example pybook, zipped: TRTH_OnDemand_HistoricalReferenceExtractionReq2.zip
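For completeness, here is a minimal sketch of the whole Step 3 flow with ExtractWithNotes (assuming r2, status_code, requestHeaders and token are set in Step 2 as above, and that the completed JSON payload exposes "Contents" and "Notes" fields as described in the REST API Reference):

import json
import time
import requests

if status_code == 202:
    # The extraction is still running: poll the monitor URL until it completes
    requestUrl = r2.headers["location"]
    while status_code == 202:
        print ('As we received a 202, we wait 30 seconds, then poll again')
        time.sleep(30)
        r2 = requests.get(requestUrl, headers=requestHeaders)
        status_code = r2.status_code

if status_code == 200:
    # Whether or not we had to poll, the 200 body now carries the results:
    jsonResponse = json.loads(r2.content)
    notes = jsonResponse["Notes"]      # extraction notes (worth analysing)
    rows = jsonResponse["Contents"]    # the historical reference data rows
    print ('Extraction notes:\n' + notes[0])
    print ('Extracted rows: ' + str(len(rows)))

This way the 200 path is handled identically whether the extraction completed immediately or only after polling, which is exactly the small-universe case that broke the original ExtractRaw flow.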