
TRTH Python API - Queuing Time

Hi,

I received the question: "I see that generating the intraday summary report with a 5-second interval for 1 RIC for 1 month takes a long time (more than 30 minutes, excluding queuing time). Is this normal? Is there any way to speed up the process?"

I could not attach the .py file, so the code from it is copied below; I have removed the username and password from the Python script.

Best regards,

Gareth

-----------------------------------------------------------------------------------------------------------------------------------

# coding: utf-8

# In[4]:
#Step 1: token request
import requests
import json
import time

requestUrl = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Authentication/RequestToken"
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json"
}
requestBody = {
    "Credentials": {
        "Username": "",  # credentials removed before posting
        "Password": ""
    }
}
proxies = {'http': 'http://webproxy.ssmb.com:8080', 'https': 'http://webproxy.ssmb.com:8080'}

r1 = requests.post(requestUrl, json=requestBody, headers=requestHeaders, proxies=proxies)
if r1.status_code == 200:
    jsonResponse = json.loads(r1.text.encode('ascii', 'ignore'))
    token = jsonResponse["value"]
    print('Authentication token (valid 24 hours):')
    print(token)
else:
    print('Please replace myUserName and myPassword with valid credentials, then repeat the request')

# In[5]:
#Step 2: send an on demand extraction request using the received token
requestUrl = 'https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractRaw'
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}
requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryIntradaySummariesExtractionRequest",
        "ContentFieldNames": [
            # "Close Ask",
            # "Close Bid",
            # "High",
            # "High Ask",
            # "High Bid",
            "Last",
            # "Low",
            # "Low Ask",
            # "Low Bid",
            # "No. Asks",
            # "No. Bids",
            "No. Trades",
            "Open",
            # "Open Ask",
            # "Open Bid",
            "Volume"
        ],
        # Alternative: Time and Sales report
        # "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryTimeAndSalesExtractionRequest",
        # "ContentFieldNames": [
        #     "Trade - Price",
        #     "Trade - Volume"
        # ],
        "IdentifierList": {
            "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {"Identifier": "ESU7", "IdentifierType": "Ric"},
            ],
            "UseUserPreferencesForValidationOptions": "false"
        },
        "Condition": {
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2017-06-28T00:00:00.000Z",
            "QueryEndDate": "2017-06-29T00:00:00.000Z",
            "SummaryInterval": "FiveSeconds",
            "TimebarPersistence": "false",
            "DisplaySourceRIC": "true"
        }
    }
}

r2 = requests.post(requestUrl, json=requestBody, headers=requestHeaders, proxies=proxies)
#displaying the response status, and the location url to use to get the status of the extraction request
#initial response status (after approximately 30 seconds wait) will be 202
print(r2.status_code)
print(r2.headers["location"])

# In[6]:
#Step 3: poll the status of the request using the received location URL, and get the jobId and extraction notes
requestUrl = r2.headers["location"]
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}
while True:
    r3 = requests.get(requestUrl, headers=requestHeaders, proxies=proxies)
    if r3.status_code == 200:
        break
    else:
        print('Failed...Re-request in 30 secs...')
        time.sleep(30)
#when the status of the request is 200 the extraction is complete; display the jobId and the extraction notes
print('response status = ' + str(r3.status_code))
if r3.status_code == 200:
    r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
    jobId = r3Json["JobId"]
    print('jobId: ' + jobId + '\n')
    notes = r3Json["Notes"]
    print('Extraction notes:\n' + notes[0])
else:
    print('execute the cell again, until it returns a response status of 200')

# In[7]:
#Step 4: get the extraction results, using the received jobId
requestUrl = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults" + "('" + jobId + "')" + "/$value"
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "text/plain",
    "Accept-Encoding": "gzip",
    "Authorization": "token " + token
}
r4 = requests.get(requestUrl, headers=requestHeaders, proxies=proxies)
# print(r4.text)

# In[8]:
#Step 5 (cosmetic): formatting the response using a pandas dataframe
from io import StringIO
import pandas as pd
timeSeries = pd.read_csv(StringIO(r4.text))
timeSeries

# In[ ]:
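Since the complaint is about long-running extractions, the fixed 30-second poll in Step 3 can be factored into a small helper with a capped exponential backoff, which wastes less time on short jobs and polls less aggressively on long ones. This is a sketch, not part of the original script; `get_status` stands in for the `requests.get(...).status_code` call against the location URL:

```python
import time

def poll_until_complete(get_status, initial_wait=30, max_wait=300, timeout=3600):
    """Poll a status callable until it returns 200.

    get_status: zero-argument callable returning an HTTP status code,
    e.g. lambda: requests.get(location, headers=headers).status_code
    Returns the number of polls made; raises TimeoutError if the
    extraction does not complete within `timeout` seconds of waiting.
    """
    wait, elapsed, polls = initial_wait, 0, 0
    while True:
        polls += 1
        if get_status() == 200:
            return polls
        if elapsed >= timeout:
            raise TimeoutError("extraction did not complete in %d seconds" % timeout)
        time.sleep(wait)
        elapsed += wait
        wait = min(wait * 2, max_wait)  # capped exponential backoff
```

Usage: `poll_until_complete(lambda: requests.get(r2.headers["location"], headers=requestHeaders, proxies=proxies).status_code)`.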


Hi Team, could someone look into this and provide an update? It's been quite a few days since this query was posted.

@Beera.Rajesh

Asked the DSS SWAT team to provide guidance on this.

I've checked the case details in Service Cloud already. It seems TR is still investigating this problem, so I'll extend the triage one week further.

Message Date: 10/07/2017 22:39
----------------------------
Hi Zhengyuan,

I would like to inform you that contracts under the chain <0#ES:> are volatile, hence retrieving 5-second intraday summary data for a month might take a little longer than usual.

Meanwhile, I am also checking with our development team whether this is expected or whether we can speed up the extraction process.

Regards,

Beera Rajesh

Upgrade Specialist - TRTH

Thomson Reuters

Case number is 05649911. There is still no further update.


Here is some useful investigation information from case 05649911:

This is expected behavior of TRTH: it may take a longer time to process a request if the input RIC is extremely liquid.



Tick History has to parse a lot of ticks to create intraday summaries on the fly, so intraday extractions are expected to take longer. If you run the same extraction as a Time and Sales report you will receive a faster response, but with a huge number of messages.
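The Time and Sales alternative mentioned above is already sketched in the commented-out lines of the script in the question: switching report types amounts to swapping the `@odata.type` and the field names in the request body, while the `IdentifierList` and `Condition` blocks stay the same. A minimal fragment, with field names taken from that commented-out code:

```python
# Request-body fragment for a Time and Sales extraction instead of
# intraday summaries. Only the report type and the content fields
# change relative to the TickHistoryIntradaySummariesExtractionRequest.
time_and_sales_request = {
    "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests."
                   "TickHistoryTimeAndSalesExtractionRequest",
    "ContentFieldNames": [
        "Trade - Price",
        "Trade - Volume",
    ],
}
```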



One more interesting note about the AWS Direct Download feature: it improves only the download speed of the extracted data, not the processing time. The time to process the data remains the same as before.
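For reference, AWS Direct Download is requested by adding one extra header to the Step 4 call, after which the server redirects the download to Amazon S3 (header name per the TRTH REST API documentation; the token value here is a placeholder):

```python
# Headers for the Step 4 RawExtractionResults request with AWS Direct
# Download enabled. This only speeds up transferring the already-processed
# file, not the extraction itself. Token value is a placeholder.
token = "<your 24-hour token>"
aws_headers = {
    "Prefer": "respond-async",
    "Content-Type": "text/plain",
    "Accept-Encoding": "gzip",
    "Authorization": "token " + token,
    "X-Direct-Download": "true",  # redirect the download to AWS S3
}
```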


This issue needs to be investigated. This doesn't appear to be a "How to" question, so we need more information in order to investigate. From an email I received, there is a case number associated with this issue, 05649911. Please provide the notes, or at least the user id, via case 05649911.

