Composite extraction request example not working in Python

I am trying to fetch Composite extraction request data from DataScope Select using Python and to generate a gzipped CSV file, but I am only getting the authentication response back and not the result data for the instrument. I am currently testing with one ISIN.
Currently I am getting the following errors:
HTTP status of the response: 400
NameError: name 'jobId' is not defined
I have attached my Python example so you can check whether I am providing something wrong in the request.
Is there already a working Python example for a Composite extraction request?
Below is how I build the Composite extraction request in Python:
requestUrl = reqStart + '/Extractions/ExtractRaw'
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}
requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.CompositeExtractionRequest",
        "ContentFieldNames": [
            "ISIN", "Announcement Date"
        ],
        "IdentifierList": {
            "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [{
                "Identifier": "DE000NLB1KJ5",
                "IdentifierType": "Isin"
            }]
        },
        "Condition": {
        }
    }
}
r2 = requests.post(requestUrl, json=requestBody, headers=requestHeaders)
r3 = r2
# Display the HTTP status of the response.
# The initial response status (after approximately 30 seconds wait) is usually 202.
status_code = r2.status_code
log("HTTP status of the response: " + str(status_code))
# If status is 202, display the location URL we received; we will use it to poll the status of the extraction request:
if status_code == 202:
    requestUrl = r2.headers["location"]
    log('Extraction is not complete, we shall poll the location URL:')
    log(str(requestUrl))
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "application/json",
        "Authorization": "token " + token
    }
    # As long as the status of the request is 202, the extraction is not finished;
    # we must wait, and poll the status until it is no longer 202:
    while status_code == 202:
        log('As we received a 202, we wait 30 seconds, then poll again (until we receive a 200)')
        time.sleep(30)
        r3 = requests.get(requestUrl, headers=requestHeaders)
        status_code = r3.status_code
        log('HTTP status of the response: ' + str(status_code))
# When the status of the request is 200, the extraction is complete;
# we retrieve and display the jobId and the extraction notes (it is recommended to analyse their content):
if status_code == 200:
    r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
    notes = r3Json["Notes"]
    log('Extraction notes:\n' + notes[0])
# If instead of a status 200 we receive a different status, there was an error:
if status_code != 200:
    log('An error occurred. Try to run this cell again. If it fails, re-run the previous cell.\n')
requestUrl = reqStart + "/Extractions/RawExtractionResults"
# AWS requires an additional header: X-Direct-Download
if useAws:
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "text/plain",
        "Accept-Encoding": "gzip",
        "X-Direct-Download": "true",
        "Authorization": "token " + token
    }
else:
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "text/plain",
        "Accept-Encoding": "gzip",
        "Authorization": "token " + token
    }
Best Answer
It returned 400 (Bad Request) with the following error:
HTTP status of the response: 400
{ "error": { "message": "Malformed request payload: For the property name \"MessageTimeStampIn\" in the JSON request the value could not be parsed successfully. Please check the casing or spelling of the property." } }
The Condition of CompositeExtractionRequest doesn't have the MessageTimeStampIn property.
Referring to the API Reference Tree, the Condition of CompositeExtractionRequest contains only one property, which is the ScalableCurrency parameter.
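As a minimal sketch, the corrected request body would look like the following. It reuses the identifier and content fields from the question; the only assumption beyond the thread is expressing ScalableCurrency as a JSON boolean (requests serializes Python True to true).

```python
# Corrected Composite extraction request body: the Condition carries only
# the ScalableCurrency parameter, not time-and-sales properties such as
# MessageTimeStampIn.
requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.CompositeExtractionRequest",
        "ContentFieldNames": [
            "ISIN", "Announcement Date"
        ],
        "IdentifierList": {
            "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [{
                "Identifier": "DE000NLB1KJ5",
                "IdentifierType": "Isin"
            }]
        },
        "Condition": {
            "ScalableCurrency": True
        }
    }
}
```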
Answers
First, you need to get the JobId from the JSON response when the status code is 200:
{
    "@odata.context": "https://selectapi.datascope.refinitiv.com/RestApi/v1/$metadata#RawExtractionResults/$entity",
    "JobId": "0x07de743d5d1d7dc7",
    "Notes": [
        "Extraction Servi...

# When the status of the request is 200 the extraction is complete;
# we retrieve and display the jobId and the extraction notes (it is recommended to analyse their content):
if status_code == 200:
    r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
    notes = r3Json["Notes"]
    jobId = r3Json["JobId"]
    log('Extraction notes:\n' + notes[0])

Then, use JobId to create a request URL (/Extractions/RawExtractionResults) to download the file.
# Advisory: avoid incomplete output - decompress then download
requestUrl = reqStart + "/Extractions/RawExtractionResults('" + jobId + "')/$value"
# AWS requires an additional header: X-Direct-Download
if useAws:
    requestHeaders = {
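Putting the pieces together, the download step could be sketched with two small helpers; the helper names (`raw_results_url`, `download_headers`) and the output filename are illustrative, not part of the DSS API:

```python
def raw_results_url(req_start, job_id):
    # Build the RawExtractionResults URL; the /$value suffix returns the raw payload.
    return req_start + "/Extractions/RawExtractionResults('" + job_id + "')/$value"

def download_headers(token, use_aws):
    # Base headers; an AWS direct download needs the extra X-Direct-Download header.
    headers = {
        "Prefer": "respond-async",
        "Content-Type": "text/plain",
        "Accept-Encoding": "gzip",
        "Authorization": "token " + token,
    }
    if use_aws:
        headers["X-Direct-Download"] = "true"
    return headers

# Usage (requires the requests package and a valid token/jobId):
#   r = requests.get(raw_results_url(reqStart, jobId),
#                    headers=download_headers(token, useAws), stream=True)
#   with open("composite.csv.gz", "wb") as f:
#       for chunk in r.raw.stream(8192, decode_content=False):
#           f.write(chunk)
```

Streaming with `decode_content=False` writes the gzip bytes to disk as received, instead of decompressing a potentially large file in memory.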
@Jirapongse I already tried with the JobId, but I am getting the error:
NameError: name 'jobId' is not defined
I have attached the code: composite_example.txt
When I just replace the request:
@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.CompositeExtractionRequest
with:
@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryTimeAndSalesExtractionRequest
it generates the JobId, and the code always works fine with the result data for the instrument.
@Jirapongse Now I have edited the Condition and just use the parameter below:
"Condition": {
    "ScalableCurrency": "true"
}
But I am getting the error:
HTTP status of the response: 403
NameError: name 'jobId' is not defined
I think you can also test my Python code as-is by just replacing the username and password. Are the other parameters in the Condition correct?
@zoya faberov @Gurpreet Do you know what is wrong with the request?
Have you applied the changes mentioned by @Jirapongse?
requestUrl = reqStart + "/Extractions/RawExtractionResults('" + jobId + "')/$value"
@Gurpreet Yes, I have uploaded it now. You can test it as-is by replacing the username and password, and the URL I am passing to generate the gzip file: composite_example.txt
@Gurpreet @Jirapongse Now the code is running.
Actually, the problem was with my Refinitiv ID, which does not have permission to fetch this information for the composite report. I tried with another Refinitiv ID and the code works fine. I still have one question: how can I pass multiple identifiers in the request? Currently I have 1000 ISINs which I want to pass. Thanks for your help.
Yes, it can be done. InstrumentIdentifiers is an array; just supply more elements into it.
"InstrumentIdentifiers": [{
    "Identifier": "DE000NLB1KJ5",
    "IdentifierType": "Isin"
},
...
]
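For 1000 ISINs you would not write the array by hand; a minimal sketch of building it programmatically, assuming the ISINs are already in a Python list (the sample values below are placeholders):

```python
# Hypothetical list of ISINs; in practice, read your 1000 ISINs from a file.
isins = ["DE000NLB1KJ5", "US4592001014", "DE0005140008"]

# Build the InstrumentIdentifierList with one entry per ISIN.
identifier_list = {
    "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
    "InstrumentIdentifiers": [
        {"Identifier": isin, "IdentifierType": "Isin"} for isin in isins
    ]
}
```

The resulting `identifier_list` can be dropped into the `IdentifierList` field of the request body shown earlier in the thread.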