TRTH Python API - Queuing Time
Hi,
I received the question: "I see that generating the
intraday summary report with a 5-second interval for 1 RIC over 1 month takes
a long time (more than 30 minutes, excluding queuing time). Is this normal? Is there any
way to speed up the process?"
I could not attach the .py file, so its code is copied below; I have removed the username and password from the Python script.
Best regards,
Gareth
-----------------------------------------------------------------------------------------------------------------------------------
# coding: utf-8
# In[4]:
#Step 1: token request
import requests
import json
import time
requestUrl = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Authentication/RequestToken"
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json"
}
requestBody = {
    "Credentials": {
        "Username": "",  # credentials removed before posting
        "Password": ""
    }
}
proxies = {'http': 'http://webproxy.ssmb.com:8080',
           'https': 'http://webproxy.ssmb.com:8080'}
r1 = requests.post(requestUrl, json=requestBody, headers=requestHeaders, proxies=proxies)
if r1.status_code == 200:
    jsonResponse = json.loads(r1.text.encode('ascii', 'ignore'))
    token = jsonResponse["value"]
    print('Authentication token (valid 24 hours):')
    print(token)
else:
    print('Please replace the username and password with valid credentials, then repeat the request')
# In[5]:
#Step 2: send an on demand extraction request using the received token
requestUrl='https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractRaw'
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}
requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryIntradaySummariesExtractionRequest",
        "ContentFieldNames": [
            # "Close Ask",
            # "Close Bid",
            # "High",
            # "High Ask",
            # "High Bid",
            "Last",
            # "Low",
            # "Low Ask",
            # "Low Bid",
            # "No. Asks",
            # "No. Bids",
            "No. Trades",
            "Open",
            # "Open Ask",
            # "Open Bid",
            "Volume"
        ],
        # Alternative: a Time and Sales request returns raw ticks faster,
        # but with a much larger number of messages:
        # "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryTimeAndSalesExtractionRequest",
        # "ContentFieldNames": [
        #     "Trade - Price",
        #     "Trade - Volume"
        # ],
        "IdentifierList": {
            "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [{
                "Identifier": "ESU7",
                "IdentifierType": "Ric"
            }],
            "UseUserPreferencesForValidationOptions": "false"
        },
        "Condition": {
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2017-06-28T00:00:00.000Z",
            "QueryEndDate": "2017-06-29T00:00:00.000Z",
            "SummaryInterval": "FiveSeconds",
            "TimebarPersistence": "false",
            "DisplaySourceRIC": "true"
        }
    }
}
r2 = requests.post(requestUrl, json=requestBody, headers=requestHeaders, proxies=proxies)
#display the response status, and the Location URL to use for polling the status of the extraction request
#the initial response status (after a wait of approximately 30 seconds) will be 202 (Accepted)
print (r2.status_code)
print (r2.headers["location"])
# In[6]:
#Step 3: poll the status of the request using the received location URL, and get the jobId and extraction notes
requestUrl = r2.headers["location"]
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}
while True:
    r3 = requests.get(requestUrl, headers=requestHeaders, proxies=proxies)
    if r3.status_code == 200:
        break
    else:
        # a 202 status means the extraction is still queued or processing, not a failure
        print('Not ready yet (status ' + str(r3.status_code) + '), polling again in 30 secs...')
        time.sleep(30)
#when the status of the request is 200 the extraction is complete; we display the jobId and the extraction notes
print('response status = ' + str(r3.status_code))
if r3.status_code == 200:
    r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
    jobId = r3Json["JobId"]
    print('jobId: ' + jobId + '\n')
    notes = r3Json["Notes"]
    print('Extraction notes:\n' + notes[0])
else:
    print('execute the cell again until it returns a response status of 200')
# In[7]:
#Step 4: get the extraction results, using the received jobId
requestUrl = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults" + "('" + jobId + "')" + "/$value"
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "text/plain",
    "Accept-Encoding": "gzip",
    "Authorization": "token " + token
}
r4 = requests.get(requestUrl, headers=requestHeaders, proxies=proxies)
# print (r4.text)
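For a full month of 5-second bars on a liquid RIC, the result in Step 4 can be large, so holding the whole CSV in `r4.text` may be slow or memory-hungry. A sketch of streaming the payload to disk instead, assuming the request in Step 4 is changed to pass `stream=True` (the helper and `FakeResponse` stub below are hypothetical names, not part of the original script):

```python
import os
import tempfile

def save_extraction(response, path, chunk_size=65536):
    """Stream an extraction payload to disk in chunks instead of holding
    the whole CSV in memory; requests transparently decompresses the gzip
    transfer encoding inside iter_content()."""
    with open(path, 'wb') as f:
        for chunk in response.iter_content(chunk_size=chunk_size):
            if chunk:
                f.write(chunk)

# In the script above this would be used as (note the added stream=True):
#   r4 = requests.get(requestUrl, headers=requestHeaders,
#                     proxies=proxies, stream=True)
#   save_extraction(r4, 'extraction.csv')

# Quick demonstration with a stubbed response object:
class FakeResponse:
    def __init__(self, payload):
        self.payload = payload
    def iter_content(self, chunk_size):
        for i in range(0, len(self.payload), chunk_size):
            yield self.payload[i:i + chunk_size]

path = os.path.join(tempfile.mkdtemp(), 'extraction.csv')
save_extraction(FakeResponse(b'Date,Last\n2017-06-28,2419.25\n'), path, chunk_size=8)
with open(path, 'rb') as f:
    saved = f.read()
print(len(saved))
```

The saved file can then be fed to pandas with `pd.read_csv(path)` in Step 5, so the raw text never needs to live in memory twice.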
# In[8]:
#Step 5 (cosmetic): formatting the response using a pandas DataFrame
from io import StringIO
import pandas as pd
timeSeries = pd.read_csv(StringIO(r4.text))
timeSeries
# In[ ]:
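Since the concern here is processing time rather than download time, it can help to instrument the polling loop of Step 3 so the server-side wait is measured separately. A minimal sketch of such a helper (`poll_until_ready` and the wiring comment are assumptions, not part of the original script; the stub stands in for the real HTTP status check):

```python
import time

def poll_until_ready(check, interval=30.0, timeout=3600.0):
    """Call check() every `interval` seconds until it returns True;
    return the elapsed seconds spent waiting, or raise TimeoutError."""
    start = time.monotonic()
    while not check():
        if time.monotonic() - start > timeout:
            raise TimeoutError('extraction did not complete within the timeout')
        time.sleep(interval)
    return time.monotonic() - start

# Demonstration with a stub that becomes ready on the third poll;
# in Step 3 above, check could be wired as (hypothetical):
#   lambda: requests.get(requestUrl, headers=requestHeaders,
#                        proxies=proxies).status_code == 200
calls = {'n': 0}
def fake_check():
    calls['n'] += 1
    return calls['n'] >= 3

waited = poll_until_ready(fake_check, interval=0.01)
print('polled %d times, waited %.2f s' % (calls['n'], waited))
```

Logging the returned wait for each extraction makes it easy to tell whether the 30+ minutes is spent in TRTH processing or in the download itself.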
Best Answer
-
Here is some useful information from the investigation of case 05649911:
This is expected behavior of TRTH: a request can take longer to process if the input RIC is extremely liquid.
Tick History has to parse a very large number of ticks to create the intraday summaries on the fly, so intraday extractions are expected to take longer. If you run the same extraction as a Time and Sales report, you will receive a faster response, but with a huge number of messages.
Regarding the AWS Direct Download feature: it only improves the download speed of the extracted data, not the processing time. The time to process the data remains the same as before.
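To illustrate the suggested alternative, the request body from Step 2 of the script can be switched to a Time and Sales extraction by changing the `@odata.type` and the field names, as the commented-out lines in the script already hint. A minimal sketch (field names are taken from those comments; check availability against the DSS documentation, and note that `SummaryInterval` does not apply to Time and Sales):

```python
# Sketch of a Time and Sales request body, adapted from the intraday
# summaries request in Step 2; only the @odata.type, the field names,
# and the Condition (no SummaryInterval) differ.
timeAndSalesBody = {
    "ExtractionRequest": {
        "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryTimeAndSalesExtractionRequest",
        "ContentFieldNames": [
            "Trade - Price",
            "Trade - Volume"
        ],
        "IdentifierList": {
            "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {"Identifier": "ESU7", "IdentifierType": "Ric"}
            ],
            "UseUserPreferencesForValidationOptions": "false"
        },
        "Condition": {
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2017-06-28T00:00:00.000Z",
            "QueryEndDate": "2017-06-29T00:00:00.000Z",
            "DisplaySourceRIC": "true"
        }
    }
}
print(timeAndSalesBody["ExtractionRequest"]["@odata.type"])
```

This body can be POSTed to the same ExtractRaw endpoint as in Step 2; the trade-off, as noted above, is a much larger message count to download and aggregate client-side.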
Answers
-
Hi team, could someone look into this and provide an update? It has been quite a few days since this query was posted.
-
This issue needs to be investigated. This does not appear to be a "how to" question, so we need more information in order to investigate. From an email I received, there is a case number associated with this issue: 05649911. Please provide the notes, or at least the user ID, via case 05649911.