For a deeper look into our DataScope Select REST API, see:

Overview |  Quickstart |  Documentation |  Downloads |  Tutorials


Downloading compressed file on TRTH

I run into an out-of-memory exception when I append the decompressed lines of the response body, using "using (var gzip = new GZipInputStream(streamResponse.Stream))" after "var streamResponse = ExtractionsContext.GetReadStream(extractionResult);".

I tried to get L1 data for one day for two chain RICs, "0#HSI:" and "0#HCEI:", and ran into the issue above.

Is it possible to download just the compressed file? How can I do that? I would decompress the file in a later stage.

Or is there a better option for the issue above?


It does work if I submit a request with a smaller number of identifiers, but this is not an ideal solution: although I no longer run into the out-of-memory exception, the time needed is still far too long.

Tags: dss-rest-api | dss | datascope-select | tick-history-rest-api

1 Answer


@Mohamed.Hisham

Please refer to .NET SDK Tutorial 5: On Demand extraction, file I/O.

It demonstrates how to set AutomaticDecompression to false and save the data in compressed format.

// Optionally request the data file directly from AWS
if (awsDownload) { extractionsContext.DefaultRequestHeaders.Add("x-direct-download", "true"); }

DssStreamResponse streamResponse = extractionsContext.GetReadStream(extractionResult);
using (FileStream fileStream = File.Create(dataOutputFile))
    streamResponse.Stream.CopyTo(fileStream);

// Reset the header after the direct download from AWS
if (awsDownload) { extractionsContext.DefaultRequestHeaders.Remove("x-direct-download"); }

Console.WriteLine("Saved the compressed data file to disk:\n" + dataOutputFile);
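For the later decompression stage, a minimal sketch of one way to do it (the file path is hypothetical, and this assumes the saved file is plain gzip): stream the file through System.IO.Compression.GZipStream and process it line by line, so the full decompressed content never has to fit in memory.

```csharp
using System;
using System.IO;
using System.IO.Compression;

class DecompressLater
{
    static void Main()
    {
        // Hypothetical path: the compressed file saved by the extraction step above.
        string dataOutputFile = "extraction_output.csv.gz";

        // Stream-decompress and read line by line; only one line is held in memory at a time.
        using (FileStream fileStream = File.OpenRead(dataOutputFile))
        using (GZipStream gzip = new GZipStream(fileStream, CompressionMode.Decompress))
        using (StreamReader reader = new StreamReader(gzip))
        {
            string line;
            long lineCount = 0;
            while ((line = reader.ReadLine()) != null)
            {
                lineCount++;  // replace with your per-line processing
            }
            Console.WriteLine("Decompressed " + lineCount + " lines.");
        }
    }
}
```

Processing each line as it is read (rather than appending all lines to a string) is what avoids the out-of-memory exception for large chain RIC extractions.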