
TickHistoryMarketDepthExtractionRequest

I requested market depth data with the code below. Although there is no error message, the file is empty. I tried different times but no luck.
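Step 1, which obtains the token used below, is omitted here; a minimal sketch of it, with placeholder credentials, would be:

import requests

#Step 1 (sketch): request an authentication token from DataScope Select.
#DSS_USERNAME and DSS_PASSWORD are placeholders for real credentials.
authUrl = 'https://selectapi.datascope.refinitiv.com/RestApi/v1/Authentication/RequestToken'
authHeaders = {"Prefer": "respond-async", "Content-Type": "application/json"}
authBody = {"Credentials": {"Username": "DSS_USERNAME", "Password": "DSS_PASSWORD"}}

r1 = requests.post(authUrl, json=authBody, headers=authHeaders)
r1.raise_for_status()
token = r1.json()["value"]  #the token string is returned in the "value" field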


#Imports used throughout (added for completeness; in the notebook they sit in an earlier cell):
import gzip
import json
import shutil
import time
import requests

#Step 2: send an on demand extraction request using the received token

requestUrl = 'https://selectapi.datascope.refinitiv.com/RestApi/v1/Extractions/ExtractRaw'

requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}

requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
        "ContentFieldNames": [
            "Ask Price",
            "Ask Size",
            "Bid Price",
            "Bid Size",
            "Domain",
            "History End",
            "History Start",
            "Instrument ID",
            "Instrument ID Type",
            "Number of Buyers",
            "Number of Sellers",
            "Sample Data"
        ],
        "IdentifierList": {
            "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {
                    "Identifier": "6501.T",
                    "IdentifierType": "Ric"
                }
            ]
        },
        "Condition": {
            "View": "NormalizedLL2",
            "NumberOfLevels": 10,
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2022-06-28T05:00:00.000Z",
            "QueryEndDate": "2022-06-28T05:35:00.000Z",
            "DisplaySourceRIC": True
        }
    }
}

r2 = requests.post(requestUrl, json=requestBody, headers=requestHeaders)


#Display the HTTP status of the response.
#Initial response status (after approximately 30 seconds wait) is usually 202.
status_code = r2.status_code
print('HTTP status of the response: ' + str(status_code))

#Step 3: if required, poll the status of the request using the received location URL.
#Once the request has completed, retrieve the jobId and extraction notes.

#If status is 202, display the location URL we received, which we will use to poll the status of the extraction request:
if status_code == 202:
    requestUrl = r2.headers["location"]
    print('Extraction is not complete, we shall poll the location URL:')
    print(str(requestUrl))
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "application/json",
        "Authorization": "token " + token
    }

#As long as the status of the request is 202, the extraction is not finished;
#we must wait, and poll the status until it is no longer 202:
while status_code == 202:
    print('As we received a 202, we wait 30 seconds, then poll again (until we receive a 200)')
    time.sleep(30)
    r3 = requests.get(requestUrl, headers=requestHeaders)
    status_code = r3.status_code
    print('HTTP status of the response: ' + str(status_code))

#When the status of the request is 200 the extraction is complete;
#we retrieve and display the jobId and the extraction notes (it is recommended to analyse their content):
if status_code == 200:
    r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
    jobId = r3Json["JobId"]
    print('\njobId: ' + jobId + '\n')
    notes = r3Json["Notes"]
    print('Extraction notes:\n' + notes[0])

#If instead of a status 200 we receive a different status, there was an error:
if status_code != 200:
    print('An error occurred. Try to run this cell again. If it fails, re-run the previous cell.\n')


#Step 4: get the extraction results, using the received jobId.
#Decompress the data and display it on screen.
#Skip this step if you asked for a large data set, and go directly to step 5!

#We also save the data to disk; note that if you use AWS it will be saved as a GZIP,
#otherwise it will be saved as a CSV!
#This discrepancy occurs because we allow automatic decompression to happen when retrieving
#from RTH, so we end up saving the decompressed contents.

#IMPORTANT NOTE:
#The code in this step is only for demo, to display some data on screen.
#Avoid using this code in production, it will fail for large data sets!
#See step 5 for production code.

#useAws, filePath and fileNameRoot are assumed to be set in an earlier cell.
requestUrl = "https://selectapi.datascope.refinitiv.com/RestApi/v1/Extractions/RawExtractionResults" + "('" + jobId + "')" + "/$value"

#AWS requires an additional header: X-Direct-Download
if useAws:
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "text/plain",
        "Accept-Encoding": "gzip",
        "X-Direct-Download": "true",
        "Authorization": "token " + token
    }
else:
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "text/plain",
        "Accept-Encoding": "gzip",
        "Authorization": "token " + token
    }

r4 = requests.get(requestUrl, headers=requestHeaders)
if useAws:
    print('Content response headers (AWS server): type: ' + r4.headers["Content-Type"] + '\n')
    #AWS does not set header Content-Encoding="gzip", so the requests call does not decompress.
    #We therefore decompress using a separate call (to the gzip library).
    uncompressedData = gzip.decompress(r4.content).decode("utf-8")
    #We save the original compressed data (to save space):
    fileName = filePath + fileNameRoot + ".step4.csv.gz"
    print('Saving compressed data to file: ' + fileName + ' ... please be patient')
else:
    print('Content response headers (TRTH server): type: ' + r4.headers["Content-Type"] + ' - encoding: ' + r4.headers["Content-Encoding"] + '\n')
    #The requests call automatically decompresses the data, if header Content-Encoding="gzip".
    uncompressedData = r4.text
    #We save the uncompressed data (because it was automatically decompressed):
    fileName = filePath + fileNameRoot + ".step4.csv"
    print('Saving uncompressed data to file: ' + fileName + ' ... please be patient')

#Save to file (the with block closes the file automatically):
#with open(fileName, 'wb') as fd:
#    for chunk in r4.iter_content(chunk_size=1024):
#        fd.write(chunk)
#print('Finished saving data to file: ' + fileName + '\n')

#Display data:
print('Decompressed data:\n' + uncompressedData)

#Note: variable uncompressedData stores all the data.
#This is not good practice, as it can lead to issues with large data sets.
#We only use it here as a convenience for the demo, to keep the code very simple.


#Step 5: get the extraction results, using the received jobId.
#We also save the compressed data to disk, as a GZIP.
#We only display a few lines of the data.

#IMPORTANT NOTE:
#This code is much better than that of step 4; it should not fail even with large data sets.
#If you need to manipulate the data, read and decompress the file, instead of decompressing
#data from the server on the fly.
#This is the recommended way to proceed, to avoid data loss issues.
#For more information, see the related document:
#  Advisory: avoid incomplete output - decompress then download

requestUrl = "https://selectapi.datascope.refinitiv.com/RestApi/v1/Extractions/RawExtractionResults" + "('" + jobId + "')" + "/$value"

#AWS requires an additional header: X-Direct-Download
if useAws:
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "text/plain",
        "Accept-Encoding": "gzip",
        "X-Direct-Download": "true",
        "Authorization": "token " + token
    }
else:
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "text/plain",
        "Accept-Encoding": "gzip",
        "Authorization": "token " + token
    }

r5 = requests.get(requestUrl, headers=requestHeaders, stream=True)
#Ensure we do not automatically decompress the data on the fly:
r5.raw.decode_content = False
if useAws:
    print('Content response headers (AWS server): type: ' + r5.headers["Content-Type"] + '\n')
    #AWS does not set header Content-Encoding="gzip".
else:
    print('Content response headers (TRTH server): type: ' + r5.headers["Content-Type"] + ' - encoding: ' + r5.headers["Content-Encoding"] + '\n')

#The next 2 lines would display some of the compressed data, but if you uncomment them
#the save to file fails, because the raw stream can only be read once:
#print('20 bytes of compressed data:')
#print(r5.raw.read(20))

fileName = filePath + fileNameRoot + ".step5.csv.gz"
print('Saving compressed data to file: ' + fileName + ' ... please be patient')
chunk_size = 1024
with open(fileName, 'wb') as fd:
    shutil.copyfileobj(r5.raw, fd, chunk_size)

print('Finished saving compressed data to file: ' + fileName + '\n')


#Now let us read and decompress the file we just created.
#For the demo we limit the treatment to a few lines:
maxLines = 10
print('Read data from file, and decompress at most ' + str(maxLines) + ' lines of it:')

uncompressedData = ""
count = 0
with gzip.open(fileName, 'rb') as fd:
    for line in fd:
        dataLine = line.decode("utf-8")
        #Do something with the data:
        print(dataLine)
        uncompressedData = uncompressedData + dataLine
        count += 1
        if count >= maxLines:
            break

#Note: variable uncompressedData stores all the data.
#This is not good practice, as it can lead to issues with large data sets.
#We only use it here as a convenience for the next step of the demo, to keep the code very simple.
#In production one would handle the data line by line (as we do with the screen display);
#see the sketch below.
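A minimal sketch of such line-by-line processing, assuming the extraction output is a CSV whose first column is #RIC (adjust the column name to match your report template):

import csv
import gzip

#Sketch: stream the rows one at a time, so memory use stays flat regardless of
#file size. Counting rows per RIC is purely illustrative; do your real work here.
def process_extraction_file(fileName):
    counts = {}
    with gzip.open(fileName, 'rt', encoding='utf-8', newline='') as fd:
        reader = csv.DictReader(fd)  #the first line is assumed to be the CSV header
        for row in reader:
            ric = row.get('#RIC', '')
            counts[ric] = counts.get(ric, 0) + 1
    return counts

#Example usage:
#print(process_extraction_file(filePath + fileNameRoot + '.step5.csv.gz'))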


Hello @aya.nakamura,

Thank you for your participation in the forum.

Is the reply below satisfactory in resolving your query?

If yes please click the 'Accept' text next to the reply. This will guide all community members who have a similar question. Otherwise please post again offering further insight into your question.

Thanks,

-AHS

Please be informed that a reply has been verified as correct in answering the question, and has been marked as such.

Thanks,
AHS


@aya.nakamura

Are you using this request?

{
    "ExtractionRequest": {
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
        "ContentFieldNames": [
            "Ask Price",
            "Ask Size",
            "Bid Price",
            "Bid Size",
            "Domain",
            "History End",
            "History Start",
            "Instrument ID",
            "Instrument ID Type",
            "Number of Buyers",
            "Number of Sellers",
            "Sample Data"
        ],
        "IdentifierList": {
            "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {
                    "Identifier": "6501.T",
                    "IdentifierType": "Ric"
                }
            ]
        },
        "Condition": {
            "View": "NormalizedLL2",
            "NumberOfLevels": 10,
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2022-06-28T05:00:00.000Z",
            "QueryEndDate": "2022-06-28T05:35:00.000Z",
            "DisplaySourceRIC": true
        }
    }
}

If yes, please contact the Refinitiv Tick History support team directly via MyRefinitiv to verify the problem. Please also share the request message and the content in Notes with the support team.


Thank you very much. Will check.

@aya.nakamura

I used the same request message (shown above) in Postman and am able to get the data properly.


The output is: [screenshot of the extracted market depth data]

You can check the Notes to verify the status of the extraction.

if status_code == 200 :
   r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
   jobId = r3Json["JobId"]
   print ('\njobId: ' + jobId + '\n') 
   notes = r3Json["Notes"]
   print ('Extraction notes:\n' + notes[0])

If the data can be extracted properly, you will see this kind of information in the Notes.

"Notes": [
        "Extraction Services Version 16.0.43633 (806c08a4ae8f), Built May  9 2022 17:14:12\nUser ID: 9008895\nExtraction ID: 2000000419751957\nCorrelation ID: CiD/9008895/0x0000000000000000/REST API/EXT.2000000419751957\nSchedule: 0x0815a19d64ce08b7 (ID = 0x0000000000000000)\nInput List (1 items):  (ID = 0x0815a19d64ce08b7) Created: 07/11/2022 08:21:38 Last Modified: 07/11/2022 08:21:38\nReport Template (12 fields): _OnD_0x0815a19d64ce08b7 (ID = 0x0815a19d64ee08b7) Created: 07/11/2022 08:20:34 Last Modified: 07/11/2022 08:20:34\nSchedule dispatched via message queue (0x0815a19d64ce08b7), Data source identifier (7F546751BCDC4C189DF2BB249641EB13)\nSchedule Time: 07/11/2022 08:20:35\nProcessing started at 07/11/2022 08:20:35\nProcessing completed successfully at 07/11/2022 08:21:39\nExtraction finished at 07/11/2022 07:21:39 UTC, with servers: tm01n03, TRTH (54.285 secs)\nInstrument <RIC,6501.T> expanded to 1 RIC: 6501.T.\nTotal instruments after instrument expansion = 1\n\nQuota Message: INFO: Tick History Cash Quota Count Before Extraction: 49199; Instruments Approved for Extraction: 1; Tick History Cash Quota Count After Extraction: 49199, 9839.8% of Limit; Tick History Cash Quota Limit: 500\nManifest: #RIC,Domain,Start,End,Status,Count\nManifest: 6501.T,Market Price,2022-06-28T04:00:00.081024428Z,2022-06-28T04:34:59.624091019Z,Active,6352\n"
    ]
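To check this programmatically, a small illustrative sketch (not part of the original code) that scans the Notes for the Manifest lines and flags instruments with no extracted data:

#Sketch: parse the Manifest lines out of the extraction Notes.
#Each data line has the form: Manifest: <RIC>,<Domain>,<Start>,<End>,<Status>,<Count>
def check_manifest(notes):
    for line in notes[0].splitlines():
        #Skip the header line "Manifest: #RIC,Domain,Start,End,Status,Count":
        if line.startswith('Manifest:') and not line.startswith('Manifest: #'):
            ric, domain, start, end, status, count = line[len('Manifest:'):].strip().split(',')
            print(ric, domain, status, count)
            if status != 'Active' or count == '0':
                print('Warning: no data was extracted for ' + ric)

#Example usage, with r3Json as retrieved while polling:
#check_manifest(r3Json["Notes"])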


@Jirapongse Thank you very much. Below is my Notes content. It seems like the last two lines indicate an error?


Extraction notes:
Extraction Services Version 16.0.43633 (806c08a4ae8f), Built May  9 2022 17:14:12
User ID: 9006461
Extraction ID: 2000000421158857
Correlation ID: CiD/9006461/0x0000000000000000/REST API/EXT.2000000421158857
Schedule: 0x0816d62fa1ee0af7 (ID = 0x0000000000000000)
Input List (1 items):  (ID = 0x0816d62fa1ee0af7) Created: 2022/07/14 13:47:31 Last Modified: 2022/07/14 13:47:31
Report Template (12 fields): _OnD_0x0816d62fa1ee0af7 (ID = 0x0816d62fa20e0af7) Created: 2022/07/14 13:41:28 Last Modified: 2022/07/14 13:41:28
Schedule dispatched via message queue (0x0816d62fa1ee0af7), Data source identifier (92D9339D81E54FD2B524C4B6EEC5416F)
Schedule Time: 2022/07/14 13:41:29
Processing started at 2022/07/14 13:41:30
Processing completed successfully at 2022/07/14 13:47:32
Extraction finished at 2022/07/14 04:47:32 UTC, with servers: tm03n02, TRTH (55.122 secs)
Instrument <RIC,6501.T> expanded to 1 RIC: 6501.T.
Total instruments after instrument expansion = 1

Manifest: #RIC,Domain,Start,End,Status,Count
Manifest: 6501.T,Market Price,,,Inactive,0
