
Market Depth data using Python on TRTH

Hi,

I am using Python to get market depth data for ICE Brent with the code below, but the result is an empty DataFrame (please see the end of the post for the full output). I am unable to understand what is going wrong.



from json import dumps, loads
from requests import post, get
import pandas as pd
import os
from time import sleep

ticker = "LCOM0"


def RequestNewToken(username='', password=''):
    _AuthenURL = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Authentication/RequestToken"
    _header = {}
    _header['Prefer'] = 'respond-async'
    _header['Content-Type'] = 'application/json; odata.metadata=minimal'
    _data = {'Credentials': {
        'Password': password,
        'Username': username
    }}

    print("Send Login request")
    resp = post(_AuthenURL, json=_data, headers=_header)

    if resp.status_code != 200:
        message = "Authentication Error Status Code: " + str(resp.status_code) + " Message:" + resp.text
        raise PermissionError(dumps(message))

    return loads(resp.text)['value']


def ExtractRaw(token, json_payload, ticker):
    _extractRawURL = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractRaw"

    # Set up the request header
    _header = {}
    _header['Prefer'] = 'respond-async'
    _header['Content-Type'] = 'application/json; odata.metadata=minimal'
    _header['Accept-Charset'] = 'UTF-8'
    _header['Authorization'] = 'Token ' + token  # note: a space is required after 'Token'

    # Send an HTTP POST message to the DSS server using the ExtractRaw URL
    resp = post(_extractRawURL, data=None, json=json_payload, headers=_header)

    # Raise an exception with an error message if the returned status is not 202 (Accepted) or 200 (OK)
    if resp.status_code != 200:
        if resp.status_code != 202:
            message = "Error: Status Code:" + str(resp.status_code) + " Message:" + resp.text
            raise Exception(message)

        # Get the monitoring location from the response header
        _location = resp.headers['Location']
        print("Get Status from " + str(_location))

        # Polling loop: re-request the status until the extraction completes
        while True:
            resp = get(_location, headers=_header)
            _pollstatus = int(resp.status_code)

            if _pollstatus == 200:
                break
            else:
                print("Status:" + str(resp.headers['Status']))
            sleep(600)  # wait for the retry interval, then re-request the status

    # Get the JobID from the HTTP response
    json_resp = loads(resp.text)
    _jobID = json_resp.get('JobId')
    print("Status is completed the JobID is " + str(_jobID) + "\n")

    # If the response contains Notes, print them
    if len(json_resp.get('Notes')) > 0:
        print("Notes:\n======================================")
        for var in json_resp.get('Notes'):
            print(var)
        print("======================================\n")

    _getResultURL = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults('" + _jobID + "')/$value"
    print("Retrieve result from " + _getResultURL)
    resp = get(_getResultURL, headers=_header, stream=True)

    # Write the output to a file
    outputfilepath = ticker + str(os.getpid()) + '.csv.gz'
    if resp.status_code == 200:
        with open(outputfilepath, 'wb') as f:
            f.write(resp.raw.read())

    print("Write output to " + outputfilepath + " completed\n")
    print("Below is sample data from " + outputfilepath)
    # Read the csv.gz data and show output from DataFrame head() and tail()
    df = pd.read_csv(outputfilepath, compression='gzip')
    print(df.head())
    print("....")
    print(df.tail())


requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
        "ContentFieldNames": [
            "Ask Price",
            "Ask Size",
            "Bid Price",
            "Bid Size",
            "Number of Buyers",
            "Number of Sellers"
        ],
        "IdentifierList": {
            "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {"Identifier": ticker, "IdentifierType": "Ric"}
            ]
        },
        "Condition": {
            "View": "NormalizedLL2",
            "NumberOfLevels": 10,
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2020-04-26T05:00:00.000Z",
            "QueryEndDate": "2020-04-26T10:00:00.000Z",
            "DisplaySourceRIC": True
        }
    }
}

ExtractRaw(RequestNewToken(), requestBody, ticker)



Output:

Send Login request
Get Status from https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractRawResult(ExtractionId='0x0712dd1988f183b9')
Status:InProgress
Status is completed the JobID is 0x0712dd1988f183b9

Notes:
======================================
Extraction Services Version 13.3.41481 (25b358914bd2), Built Apr 1 2020 23:28:57
User ID: 9015265
Extraction ID: 2000000145144089
Schedule: 0x0712dd1988f183b9 (ID = 0x0000000000000000)
Input List (1 items): (ID = 0x0712dd1988f183b9) Created: 04/27/2020 10:29:12 Last Modified: 04/27/2020 10:29:12
Report Template (6 fields): _OnD_0x0712dd1988f183b9 (ID = 0x0712dd198b0183b9) Created: 04/27/2020 10:28:07 Last Modified: 04/27/2020 10:28:07
Schedule dispatched via message queue (0x0712dd1988f183b9), Data source identifier (0F47E0BA5DE047C49841ACBB5DEDF386)
Schedule Time: 04/27/2020 10:28:09
Processing started at 04/27/2020 10:28:09
Processing completed successfully at 04/27/2020 10:29:12
Extraction finished at 04/27/2020 10:29:12 UTC, with servers: tm07n03, TRTH (52.44 secs)
Instrument <RIC,LCOM0> expanded to 1 RIC: LCOM0.
Total instruments after instrument expansion = 1
Manifest: #RIC,Domain,Start,End,Status,Count
Manifest: LCOM0,Market Price,,,Inactive,0
======================================

Retrieve result from https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults('0x0712dd1988f183b9')/$value
Write output to LCOM014912.csv.gz completed

Below is sample data from LCOM014912.csv.gz
Empty DataFrame
Columns: [#RIC, Domain, Date-Time, GMT Offset, Type, L1-BidPrice, L1-BidSize, L1-BuyNo, L1-AskPrice, L1-AskSize, L1-SellNo, L2-BidPrice, L2-BidSize, L2-BuyNo, L2-AskPrice, L2-AskSize, L2-SellNo, L3-BidPrice, L3-BidSize, L3-BuyNo, L3-AskPrice, L3-AskSize, L3-SellNo, L4-BidPrice, L4-BidSize, L4-BuyNo, L4-AskPrice, L4-AskSize, L4-SellNo, L5-BidPrice, L5-BidSize, L5-BuyNo, L5-AskPrice, L5-AskSize, L5-SellNo, L6-BidPrice, L6-BidSize, L6-BuyNo, L6-AskPrice, L6-AskSize, L6-SellNo, L7-BidPrice, L7-BidSize, L7-BuyNo, L7-AskPrice, L7-AskSize, L7-SellNo, L8-BidPrice, L8-BidSize, L8-BuyNo, L8-AskPrice, L8-AskSize, L8-SellNo, L9-BidPrice, L9-BidSize, L9-BuyNo, L9-AskPrice, L9-AskSize, L9-SellNo, L10-BidPrice, L10-BidSize, L10-BuyNo, L10-AskPrice, L10-AskSize, L10-SellNo]
Index: []
....
Empty DataFrame
Columns: [#RIC, Domain, Date-Time, GMT Offset, Type, L1-BidPrice, L1-BidSize, L1-BuyNo, L1-AskPrice, L1-AskSize, L1-SellNo, L2-BidPrice, L2-BidSize, L2-BuyNo, L2-AskPrice, L2-AskSize, L2-SellNo, L3-BidPrice, L3-BidSize, L3-BuyNo, L3-AskPrice, L3-AskSize, L3-SellNo, L4-BidPrice, L4-BidSize, L4-BuyNo, L4-AskPrice, L4-AskSize, L4-SellNo, L5-BidPrice, L5-BidSize, L5-BuyNo, L5-AskPrice, L5-AskSize, L5-SellNo, L6-BidPrice, L6-BidSize, L6-BuyNo, L6-AskPrice, L6-AskSize, L6-SellNo, L7-BidPrice, L7-BidSize, L7-BuyNo, L7-AskPrice, L7-AskSize, L7-SellNo, L8-BidPrice, L8-BidSize, L8-BuyNo, L8-AskPrice, L8-AskSize, L8-SellNo, L9-BidPrice, L9-BidSize, L9-BuyNo, L9-AskPrice, L9-AskSize, L9-SellNo, L10-BidPrice, L10-BidSize, L10-BuyNo, L10-AskPrice, L10-AskSize, L10-SellNo]
Index: []

Tags: python, tick-history-rest-api, market-depth

1 Answer (Accepted)

Hi @varun.divakar,

The following information in the extraction Notes indicates that there was no active data for LCOM0 during the requested range. You may try another query range, such as one starting at 2020-04-20T05:00:00.000Z, instead.

Extraction finished at 04/27/2020 10:29:12 UTC, with servers: tm07n03, TRTH (52.44 secs)
Instrument <RIC,LCOM0> expanded to 1 RIC: LCOM0.
Total instruments after instrument expansion = 1
Manifest: #RIC,Domain,Start,End,Status,Count
Manifest: LCOM0,Market Price,,,Inactive,0
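You can also catch this case programmatically by parsing the Manifest lines from the Notes before reading the CSV. A minimal sketch, assuming the Manifest columns follow the header #RIC,Domain,Start,End,Status,Count shown above (the check_manifest helper and its return shape are illustrative, not part of the DSS API):

```python
def check_manifest(notes):
    """Parse 'Manifest:' lines from the extraction Notes and return
    (ric, status, count) tuples, skipping the column-header line."""
    rows = []
    for note in notes:
        for line in note.splitlines():
            if line.startswith("Manifest:") and "#RIC" not in line:
                # Columns follow the header: #RIC,Domain,Start,End,Status,Count
                ric, _domain, _start, _end, status, count = (
                    line[len("Manifest:"):].strip().split(","))
                rows.append((ric, status, int(count)))
    return rows

# Notes as returned in the JSON status response above
notes = [
    "Manifest: #RIC,Domain,Start,End,Status,Count\n"
    "Manifest: LCOM0,Market Price,,,Inactive,0"
]
for ric, status, count in check_manifest(notes):
    if status == "Inactive" or count == 0:
        # prints: LCOM0: no rows in the requested range (status=Inactive)
        print(ric + ": no rows in the requested range (status=" + status + ")")
```

A check like this lets the script skip the download (or retry with a different range) instead of writing and reading an empty csv.gz file.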