Question by YK_deprecated_1 · Jul 07, 2017 at 06:17 AM · Tags: python, tick-history-rest-api, pricing

TRTH: Retrieving range data using Time and Sales Data

Hi there,

I have a problem extracting data from Tick History.

I specified a date range in the report request but could not retrieve all of the data. How can I retrieve all of the data with the code below? Any help would be appreciated.

Thank you,

import json
import requests

# header2 is assumed to be defined earlier and to hold the
# Content-Type and Authorization (token) headers for the session.
body_data = json.dumps({
    "ExtractionRequest": {
        "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryTimeAndSalesExtractionRequest",
        "ContentFieldNames": [
            "Quote - Bid Price",
            "Quote - Bid Size",
            "Quote - Ask Price",
            "Quote - Ask Size"
        ],
        "IdentifierList": {
            "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [ { "Identifier": "JNIc1", "IdentifierType": "Ric" } ],
            "ValidationOptions": None,
            "UseUserPreferencesForValidationOptions": False
        },
        "Condition": {
            "MessageTimeStampIn": "",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2017-01-03T23:45:00.000Z",
            "QueryEndDate": "2017-01-06T20:30:00.000Z",
            "DisplaySourceRIC": True
        }
    }
})

# Submit the on-demand extraction request
responseGet = requests.post("https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractRaw",
                            data=body_data,
                            headers=header2)

res_json = responseGet.json()
job_id = res_json['JobId']

# Download the raw extraction result as a stream
response_obj = requests.get("https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults('{0}')/$value".format(job_id),
                            headers=header2, stream=True)

gzip_file = "jnic1.csv"

with open(gzip_file, 'wb') as f:
    for data in response_obj.raw.stream(decode_content=True):
        f.write(data)
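As an aside, the code above assumes the `JobId` is in the first response. On-demand extractions can also complete asynchronously: the server may answer 202 Accepted with a monitor URL in the `Location` header, which should be polled until it returns 200. A minimal sketch of that loop, hedged as an illustration (`post_fn`/`get_fn` are hypothetical injection points standing in for `requests.post`/`requests.get`; they are not part of the DSS API):

```python
import time

def wait_for_extraction(post_fn, get_fn, url, body, headers, poll_seconds=30):
    """POST an on-demand extraction, then follow the 202 Location
    header until the server reports the job complete (HTTP 200)."""
    resp = post_fn(url, data=body, headers=headers)
    while resp.status_code == 202:
        time.sleep(poll_seconds)
        resp = get_fn(resp.headers["Location"], headers=headers)
    resp.raise_for_status()
    return resp.json()["JobId"]
```

In practice this would be called with `requests.post` and `requests.get` plus the `ExtractRaw` URL and `header2` from the snippet above.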



REFINITIV
veerapath.rungruengrayubkul ♦♦ · Jul 07, 2017 at 07:43 AM

@YK

With this code, the extracted result starts at 2017-01-04T08:45:00.077934619+09 and ends at 2017-01-04T08:45:00.077934619+09. Do you receive the same result?

Also, how did you determine that not all of the data was received? Could you please elaborate?

YK_deprecated_1 (replying to veerapath.rungruengrayubkul) · Jul 10, 2017 at 05:31 AM

@veerapath.rungruengrayubkul

Thanks for your help.

I receive the same result with your code.

I could not get the whole data set because the result did not include all of the data I specified in the condition.

2 Replies

REFINITIV
Best Answer
Answer by Christiaan Meihsl · Jul 10, 2017 at 04:11 AM

@YK, am I right in guessing that you are only receiving the first part of the expected data? If so, when you run the query several times (try at least 10 times), is the number of lines of received data always the same, or does it vary? If it varies, this might be related to a similar issue we saw in Java, where libraries that were not robust enough dropped the stream while decoding data on the fly.

I see you set decode_content=True. If I am not mistaken, that means the file will be decompressed before being saved to disk. Can you try setting it to False?
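For what it's worth, with decode_content=False the bytes written to disk are the gzip payload itself, so the file can be saved as .csv.gz and decompressed locally in a separate step. A rough sketch under that assumption (save_stream_to_file and read_gzip_lines are illustrative names, not DSS helpers; raw_stream would be response_obj.raw from the snippet in the question):

```python
import gzip
import shutil

def save_stream_to_file(raw_stream, path, chunk_size=64 * 1024):
    """Copy an HTTP response body to disk verbatim, without decoding it.
    raw_stream is any file-like object, e.g. response.raw from requests."""
    with open(path, "wb") as f:
        shutil.copyfileobj(raw_stream, f, chunk_size)

def read_gzip_lines(path):
    """Decompress a saved .csv.gz file locally and return its text lines."""
    with gzip.open(path, "rt") as f:
        return f.read().splitlines()
```

Separating the download from the decompression also means a corrupt file can be detected (gzip.open raises on a truncated archive) and re-downloaded without redoing the extraction.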


xi.yang · Mar 07, 2018 at 07:19 PM

@Christiaan Meihsl I found an example that retrieves the latest schedule files or venue files with decode_content=True. I am wondering why it works in that case, and why the on-demand request should be treated differently with respect to decode_content.

Answer by YK_deprecated_1 · Jul 10, 2017 at 05:37 AM

@Christiaan Meihsl

After setting decode_content to False, I can get all the data in a gzip file.

I still can't figure out why I cannot get the whole data set with decode_content=True. Does it simply exceed some capacity of the API, or is there some other reason?

In any case, the issue is resolved.

Thank you!


REFINITIV
Christiaan Meihsl ♦♦ · Jul 10, 2017 at 07:48 AM

@YK, in the similar issue I mentioned with the Java libraries, we observed that decoding worked fine when the data set was small, but started failing when it was larger. I suspect many such libraries were tested on fairly small data sets, which correspond to the common use cases. With TRTH we are often handling large data sets, which is somewhat atypical, and it seems some libraries were simply not built for that.

Glad I could help solve the issue.
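If decoding large results on the fly is still wanted, one defensive pattern is to stream the download and decompress it incrementally with Python's standard zlib module (passing 16 + MAX_WBITS tells zlib to expect a gzip header), so no single library call ever has to hold the whole payload. A sketch, assuming the input arrives as an iterable of byte chunks such as response.iter_content():

```python
import zlib

def stream_decompress(chunks):
    """Incrementally decompress a gzip byte stream, chunk by chunk,
    so arbitrarily large payloads never need to fit in memory at once."""
    decomp = zlib.decompressobj(16 + zlib.MAX_WBITS)
    for chunk in chunks:
        out = decomp.decompress(chunk)
        if out:
            yield out
    tail = decomp.flush()
    if tail:
        yield tail
```

Because the decompressor keeps its own internal state between calls, chunk boundaries can fall anywhere in the gzip stream without corrupting the output.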

