Question

Are there any TRTH On Demand limitations on results returned? Only seeing 22K lines returned for a given symbol

Tags: dss-rest-api, tick-history-rest-api, api-limits

Accepted Answer

@Troy.Pfiffner,

Issue analysis

I made a very similar request in Postman. Request body:

{
  "ExtractionRequest": {
    "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryTimeAndSalesExtractionRequest",
    "ContentFieldNames": [ "Trade - Price", "Trade - Volume" ],
    "IdentifierList": {
      "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList", 
      "InstrumentIdentifiers": [{ "Identifier": "UNG", "IdentifierType": "Ric" }]
    },
    "Condition": {
      "MessageTimeStampIn": "LocalExchangeTime",
      "ApplyCorrectionsAndCancellations": false,
      "ReportDateRangeType": "Range",
      "QueryStartDate": "2018-04-27T11:00:00.000Z",
      "QueryEndDate": "2018-05-24T23:00:00.000Z",
      "DisplaySourceRIC": true
    }
  }
}

After polling the location URL to get the JobId, the delivered data contained slightly more than 100k records, covering the entire query range from 27 April till 24 May.
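The polling step works the same way in any language: the On Demand extraction request returns HTTP 202 with a Location (monitor) URL, which is polled until it returns 200 together with the JobId. A minimal Python sketch, with an injected `fetch` callable standing in for a real authenticated HTTP GET (the callable, URL, and helper name are illustrative, not part of the DSS SDK):

```python
import time

def poll_for_jobid(fetch, location_url, interval=2.0, max_attempts=30):
    """Poll an async-extraction monitor URL until HTTP 200, then return the JobId.

    `fetch` is any callable returning (status_code, json_body); it stands in
    for a real HTTP GET carrying the DSS Authorization token (hypothetical here).
    """
    for _ in range(max_attempts):
        status, body = fetch(location_url)
        if status == 200:                # extraction finished
            return body["JobId"]
        time.sleep(interval)             # 202 InProgress: wait and retry
    raise TimeoutError("extraction did not complete in time")

# Stubbed usage: the monitor URL answers 202 twice, then 200 with the JobId.
responses = iter([(202, {}), (202, {}), (200, {"JobId": "0x0abc"})])
job_id = poll_for_jobid(lambda url: next(responses), "https://example/monitor", interval=0)
print(job_id)  # 0x0abc
```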

When I ran your code I got 24332 records; the first and last were:

UNG,Market Price,2018-04-27T06:05:20.697783330-04,Trade,22.84,1200
UNG,Market Price,2018-05-03T10:35:00.733435679-04,Trade,22.08,500

The results stop before the end of the requested range. This is typical when decompressing TRTH downloaded data on the fly with standard libraries. See this advisory for details and explanations.
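The silent truncation happens because a TRTH extraction file is a concatenation of several gzip members, and a decoder that stops after the first member returns only part of the data without raising an error. A minimal Python illustration of the file-format behavior (the same single-member limitation the advisory describes for the standard .NET decompression classes):

```python
import gzip
import zlib

# A downloaded extraction can be several gzip "members" concatenated together.
blob = gzip.compress(b"first member\n") + gzip.compress(b"second member\n")

# A single-member decoder stops silently at the first member boundary:
decoder = zlib.decompressobj(wbits=31)      # 31 = expect a gzip wrapper
partial = decoder.decompress(blob)
print(partial)                              # b'first member\n' only
print(len(decoder.unused_data) > 0)         # True: the rest was never read

# A member-aware decoder recovers the whole file:
print(gzip.decompress(blob))                # b'first member\nsecond member\n'
```

Note that no exception is raised in the single-member case; the truncation can only be detected by checking for leftover input, which is why the failure goes unnoticed.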

Solutions

The recommended way to proceed is to save the compressed data from the stream directly into a gzip file, and then read and decompress the data from the file for further treatment.
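A minimal sketch of that save-then-decompress workflow, in Python for brevity (the in-memory stream stands in for the DSS read stream; the file name and helper are illustrative):

```python
import gzip
import io
import os
import shutil
import tempfile

def save_then_decompress(stream, path):
    # Step 1: copy the raw compressed bytes to disk untouched.
    with open(path, "wb") as out:
        shutil.copyfileobj(stream, out)
    # Step 2: re-read the file with a member-aware gzip reader and
    # hand back the decoded lines for further treatment.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]

# Simulated download stream: two concatenated gzip members, three CSV rows.
blob = gzip.compress(b"UNG,22.84,1200\nUNG,22.08,500\n") + gzip.compress(b"UNG,23.92,0\n")
path = os.path.join(tempfile.mkdtemp(), "extraction.csv.gz")
rows = save_then_decompress(io.BytesIO(blob), path)
print(len(rows))  # 3
```

Saving the compressed file first also leaves you an on-disk copy to re-process or inspect if a later step fails, instead of having to re-run the extraction.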

Your code decompresses data on the fly as it is received, without the intermediate step of saving it to disk. This workflow is not recommended: it fails silently when the data set is above a certain size, because the core libraries it relies on do not support merged (concatenated) gzip files. It is possible (but not recommended) to do on-the-fly decompression using libraries that handle merged files reliably, like SharpZipLib.

As stated, on-the-fly data treatment is not recommended for large data sets, but here is how you could do it. Add the library:

using ICSharpCode.SharpZipLib.GZip;

Change the code to retrieve the data:

if (extractionResult.Status == ThomsonReuters.Dss.Api.Jobs.JobStatus.Completed)
{
    // Retrieve the raw (compressed) result stream for the completed job.
    var streamResponse = extractionsContext.GetReadStream(extractionResult.Result);
    // GZipInputStream (SharpZipLib) handles concatenated gzip members,
    // so merged files are read in full.
    using (var gzip = new GZipInputStream(streamResponse.Stream))
    using (var reader = new StreamReader(gzip, Encoding.UTF8))
    {
        var result = reader.ReadLine();
        if (string.IsNullOrEmpty(result))
            System.Diagnostics.Debug.WriteLine("No raw results returned");
        else
        {
            System.Diagnostics.Debug.WriteLine(result);   // header line
            int lineCount = 0;
            for (; !reader.EndOfStream; lineCount++)
            {
                result = reader.ReadLine();
                if (lineCount == 0)
                    System.Diagnostics.Debug.WriteLine(result);   // first record
            }
            System.Diagnostics.Debug.WriteLine(result);           // last record
            System.Diagnostics.Debug.WriteLine("Records: " + lineCount);
        }
    }
    // Output notes
    System.Diagnostics.Debug.WriteLine("NOTES:");
    foreach (var note in extractionResult.Result.Notes)
        System.Diagnostics.Debug.WriteLine(note);
}

When I ran this code I got 101772 records; the first and last were:

UNG,Market Price,2018-04-27T07:00:00.020488185-04,Trade,22.8,1100
UNG,Market Price,2018-05-23T18:30:00.020374970-04,Trade,23.92,0

You will find a full set of relevant C# code in the .Net SDK Tutorial 5, covering various use cases (the tutorial code is available for download). That tutorial uses both GZip decompression and SharpZipLib.

On the same topic you might also want to look at this thread.

Side comment

The extraction notes reveal that an embargo was applied, which removed some of the data:

Some data suppressed for release cycle(s) C3, C4, PE 64. Request occurred during embargo. Data currently available through 05/21/2018 20:00:00. See Release Cycle schedule in the user documentation for details.

I get the same type of warning, but it only impacts the last day of the request (today).


That's it. Thanks a lot!!


@Troy.Pfiffner,

There is no limit on the number of lines per instrument. Existing limits are documented here.

If your result is truncated, it could be the result of decompressing downloaded data on the fly, see this advisory. Ways to avoid that can be seen in our sample codes, available under the downloads tab for C#, Java and Python.

If these hints do not help, please give us more details on:

  • The request you make (request type, date range, instrument, and other parameters)
  • How you retrieve the data (ideally by attaching the source code you are using, or at least the relevant extracts).


Thanks. Attached code.txt

Also here are the notes

User ID: 9017767

Extraction ID: 2000000028651689

Schedule: 0x062fd9e5e03b2f86 (ID = 0x0000000000000000)

Input List (1 items): (ID = 0x062fd9e5e03b2f86) Created: 05/22/2018 16:32:59 Last Modified: 05/22/2018 16:32:59

Report Template (2 fields): _OnD_0x062fd9e5e03b2f86 (ID = 0x062fd9e5e51b2f86) Created: 05/22/2018 16:30:10 Last Modified: 05/22/2018 16:30:10

Schedule dispatched via message queue (0x062fd9e5e03b2f86), Data source identifier (B34B43CE46FC481AB9819A2A2DC9DC00)

Schedule Time: 05/22/2018 16:30:12

Processing started at 05/22/2018 16:30:12

Processing completed successfully at 05/22/2018 16:32:59

Extraction finished at 05/22/2018 20:32:59 UTC, with servers: tm02n03, TRTH (142.894 secs)

Instrument <RIC,UNG> expanded to 1 RIC: UNG.

Total instruments after instrument expansion = 1

Range Query from 2018-04-26T00:30:00.847 to 2018-05-23T00:30:00.847 (UTC)

(RIC,UNG,PCQ) Some data suppressed for release cycle(s) C3, C4, PE 64. Request occurred during embargo. Data currently available through 05/21/2018 20:00:00. See Release Cycle schedule in the user documentation for details.

Quota Message: INFO: Tick History Cash Quota Count Before Extraction: 30; Instruments Approved for Extraction: 1; Tick History Cash Quota Count After Extraction: 30, 1.15384615384615% of Limit; Tick History Cash Quota Limit: 2600

Manifest: #RIC,Domain,Start,End,Status,Count

Manifest: UNG,Market Price,2018-04-26T09:34:06.914355014Z,2018-05-21T22:59:22.017604763Z,Active,97029


code.txt (3.9 KiB)