How to do Proxy setting in RTH REST API using Python

I am getting the error below in Python when I try to use the RTH REST API:

TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

Is it related to the proxy? How do I provide proxy settings in Python in order to run extractions against the TRTH REST API?

Below is part of the code, which is almost the same as the one given in the tutorial:

# Imports:
import requests
import json
import shutil
import time
import urllib3
import gzip

# ====================================================================================
# Step 1: token request

reqStart = "https://selectapi.datascope.refinitiv.com/RestApi/v1"
requestUrl = reqStart + "/Authentication/RequestToken"

requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json"
}

# myUsername and myPassword are placeholders; set them to valid DSS credentials before running:
requestBody = {
    "Credentials": {
        "Username": myUsername,
        "Password": myPassword
    }
}

r1 = requests.post(requestUrl, json=requestBody, headers=requestHeaders)

if r1.status_code == 200:
    jsonResponse = json.loads(r1.text.encode('ascii', 'ignore'))
    token = jsonResponse["value"]
    print('Authentication token (valid 24 hours):')
    print(token)
else:
    print('Replace myUserName and myPassword with valid credentials, then repeat the request')


# Step 2: send an on demand extraction request using the received token

requestUrl = reqStart + '/Extractions/ExtractRaw'

requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}

requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryRawExtractionRequest",
        "IdentifierList": {
            "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [{
                "Identifier": "CARR.PA",
                "IdentifierType": "Ric"
            }]
        },
        "Condition": {
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2016-09-29T12:00:00.000Z",
            "QueryEndDate": "2016-09-29T12:10:00.000Z",
            "ExtractBy": "Ric",
            "SortBy": "SingleByRic",
            "DomainCode": "MarketPrice",
            "DisplaySourceRIC": "true"
        }
    }
}

r2 = requests.post(requestUrl, json=requestBody, headers=requestHeaders)
r3 = r2  # if the extraction completes immediately (status 200), r3 already holds the final response

# Display the HTTP status of the response
# Initial response status (after approximately 30 seconds wait) is usually 202
status_code = r2.status_code
print("HTTP status of the response: " + str(status_code))


# Step 3: if required, poll the status of the request using the received location URL.
# Once the request has completed, retrieve the jobId and extraction notes.

# If status is 202, display the location url we received, and will use to poll the status of the extraction request:
if status_code == 202:
    requestUrl = r2.headers["location"]
    print('Extraction is not complete, we shall poll the location URL:')
    print(str(requestUrl))

    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "application/json",
        "Authorization": "token " + token
    }

# As long as the status of the request is 202, the extraction is not finished;
# we must wait, and poll the status until it is no longer 202:
while (status_code == 202):
    print('As we received a 202, we wait 30 seconds, then poll again (until we receive a 200)')
    time.sleep(30)
    r3 = requests.get(requestUrl, headers=requestHeaders)
    status_code = r3.status_code
    print('HTTP status of the response: ' + str(status_code))

# When the status of the request is 200 the extraction is complete;
# we retrieve and display the jobId and the extraction notes (it is recommended to analyse their content):
if status_code == 200:
    r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
    jobId = r3Json["JobId"]
    print('\njobId: ' + jobId + '\n')
    notes = r3Json["Notes"]
    print('Extraction notes:\n' + notes[0])

# If instead of a status 200 we receive a different status, there was an error:
if status_code != 200:
    print('An error occurred. Try to run this cell again. If it fails, re-run the previous cell.\n')
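
For reference, the step that normally follows in this tutorial (not shown in the snippet above, although the unused gzip and shutil imports hint at it) downloads the extraction result using the jobId. A minimal sketch, assuming the standard DSS RawExtractionResults endpoint and an output file name chosen here purely for illustration:

# Step 4 (sketch): download the raw extraction result using the jobId.
# The endpoint and file name below are assumptions based on the standard
# DSS on-demand tutorials, not part of the code posted above.
if status_code == 200:
    requestUrl = reqStart + "/Extractions/RawExtractionResults('" + jobId + "')/$value"
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "text/plain",
        "Accept-Encoding": "gzip",
        "Authorization": "token " + token
    }
    # Stream the (gzip compressed) payload straight to disk to avoid
    # holding a potentially large extraction in memory.
    r4 = requests.get(requestUrl, headers=requestHeaders, stream=True)
    r4.raw.decode_content = False  # keep the payload compressed on disk
    fileName = "extraction_output.csv.gz"  # illustrative file name
    with open(fileName, 'wb') as fd:
        shutil.copyfileobj(r4.raw, fd)
    print('Saved compressed results to ' + fileName)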
Tags: python, tick-history-rest-api

1 Answer


Hello @rahul.deshmukh ,

If your network uses a proxy server and you haven't explicitly provided the proxy information to the Python modules, then your application will get connection timeouts.

To provide the proxy server information, insert these lines at the top of Step 1 in the code:

import os
os.environ["HTTPS_PROXY"] = "http://YOUR_PROXY_SERVER_HOST:PROXY_PORT"
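
Alternatively (this is just the standard requests library option, not part of the original answer), the proxy can be passed per call via the proxies argument instead of the environment variable; the host and port below are placeholders:

import requests

# Placeholder proxy address; replace with your real proxy host and port.
proxies = {
    "http": "http://YOUR_PROXY_SERVER_HOST:PROXY_PORT",
    "https": "http://YOUR_PROXY_SERVER_HOST:PROXY_PORT",
}

# Any requests call in the tutorial accepts the same argument, e.g. the Step 1 token request:
r1 = requests.post(requestUrl, json=requestBody, headers=requestHeaders, proxies=proxies)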

@Gurpreet If, for example, my proxy is proxy1.xxx:8080, should I then do something like the below?

import os
# Step 1: token request
os.environ["HTTPS_PROXY"] = "http://proxy1.xxx:8080"
reqStart = "https://selectapi.datascope.refinitiv.com/RestApi/v1"
requestUrl = reqStart + "/Authentication/RequestToken"

Yes, that is correct.

@Gurpreet Thanks, it's working now. Is it fine if I ask a few other questions regarding the RTH REST API? I am using the code from the Refinitiv tutorial rth-ondemand-raw20210827 (1).zip

rth-ondemand-raw20210827-1.zip (3.2 KiB)

@Gurpreet

As we are expecting to extract a huge amount of data, I have the questions below regarding the implementation of RTH REST:

1) We have a list of 3000 identifiers. What is the best practice, in terms of performance, stability and no loss of data, for sending these identifiers to RTH REST? Should we send the identifier requests one by one, or in bulk, for example 500 identifiers in one RTH REST request, then another 500, and so on? (A batching sketch follows this list.)

2) In the demo example I see comments in some parts saying the example is for demo purposes only and will create problems in production. What does this actually mean, and where do we need to update the code?

3) As we are expecting a huge volume of data from RTH REST for the list of 3000 identifiers, we are thinking of sending either one identifier per request or bulk identifiers (500 per request), getting the data from the response, storing it in a CSV zip file as in the demo example, and then loading it into a database table. We would then send the next identifier request, delete the previous CSV zip file, fetch the new data and load it into the database table, and so on until all 3000 identifiers have been sent. Is this a good approach considering the volume of data and performance?
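
Not an official recommendation, only an illustration of the batching idea from question 1: a short sketch that splits a hypothetical list of 3000 RICs into chunks of 500 and builds one InstrumentIdentifiers payload per chunk, reusing the rest of the ExtractionRequest body from the tutorial code above.

# Hypothetical list of 3000 RICs; in practice this would come from a file or database.
allRics = ["RIC" + str(i) + ".PA" for i in range(3000)]

chunkSize = 500  # identifiers per ExtractRaw request (illustrative value)

for start in range(0, len(allRics), chunkSize):
    chunk = allRics[start:start + chunkSize]
    # Build the identifier list for this batch; the Condition and the other
    # parts of the ExtractionRequest stay the same as in the tutorial code.
    identifierList = {
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
        "InstrumentIdentifiers": [
            {"Identifier": ric, "IdentifierType": "Ric"} for ric in chunk
        ]
    }
    # ... submit the ExtractRaw request, poll until complete, download the
    # result, load it into the database, then move on to the next chunk.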

@rahul.deshmukh, I would advise that you ask these questions in a new post. They will likely get lost in here, and we like to keep answers to the point.


In general, the best practice is described in the sample code and tutorials.

@Gurpreet Thanks, I will first try a few use cases, passing identifiers one by one and also passing bulk identifiers to RTH REST, and then I will open a new question after that. We are normally going to extract a huge volume of data, so performance is the main factor. One more question: can we get an instrument list for RTH REST?
