Hi All,
I believe I am not following best practices when processing the response (for example, chunking, paging, or streaming), and as a result my application runs out of memory (breaching the 500 MB limit). I would like your advice on how to avoid the OOM situation by processing the data more efficiently.
I looked at the examples on https://selectapi.datascope.refinitiv.com/RestApi.Help/Home/KeyMechanisms?ctx=Extractions&tab=0&uid=StreamingJson and https://developers.lseg.com/en/api-catalog/datascope-select/datascope-select-rest-api/download, but I did not find one I could adapt to my use case.
Here are the steps my application (Java 17) performs:
I call the following endpoint with 6000 instruments to retrieve 6 data points per instrument:
https://selectapi.datascope.refinitiv.com/RestApi/v1/Extractions/ExtractWithNotes
"@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.EndOfDayPricingExtractionRequest"
This request times out as expected, and when I poll using the location URL I eventually get the data I need from:
https://selectapi.datascope.refinitiv.com/RestApi/v1/Extractions/ExtractRawResult(ExtractionId='test')
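To make the flow concrete, here is a simplified sketch of how I submit the request and poll for the result with java.net.http.HttpClient. The class name, sessionToken, and requestJson are placeholders, and authentication, error handling, and the full EndOfDayPricingExtractionRequest body are omitted:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Simplified sketch of my current flow; token management and the real
// request payload are left out.
public class EndOfDayExtractionClient {

    private static final String EXTRACT_URL =
        "https://selectapi.datascope.refinitiv.com/RestApi/v1/Extractions/ExtractWithNotes";

    private final HttpClient http = HttpClient.newBuilder()
        .connectTimeout(Duration.ofSeconds(30))
        .build();

    public String extract(String sessionToken, String requestJson) throws Exception {
        HttpRequest submit = HttpRequest.newBuilder(URI.create(EXTRACT_URL))
            .header("Authorization", "Token " + sessionToken)
            .header("Content-Type", "application/json")
            .header("Prefer", "respond-async")
            .POST(HttpRequest.BodyPublishers.ofString(requestJson))
            .build();

        HttpResponse<String> response = http.send(submit, HttpResponse.BodyHandlers.ofString());

        // 202 Accepted means the extraction is still running; keep polling the
        // monitor URL from the Location header until it returns 200.
        while (response.statusCode() == 202) {
            String locationUrl = response.headers().firstValue("Location").orElseThrow();
            Thread.sleep(30_000);
            HttpRequest poll = HttpRequest.newBuilder(URI.create(locationUrl))
                .header("Authorization", "Token " + sessionToken)
                .GET()
                .build();
            response = http.send(poll, HttpResponse.BodyHandlers.ofString());
        }

        // At this point I fetch ExtractRawResult(ExtractionId='...') the same way
        // and keep its entire body in memory as a String before mapping it into a
        // list of POJOs; this is where the 500 MB limit gets breached.
        return response.body();
    }
}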
I store this data in a list. Then I call the following endpoint with the same 6000 instruments to retrieve 1 additional data point per instrument; this particular data point is only available through this API:
https://selectapi.datascope.refinitiv.com/RestApi/v1/Extractions/ExtractWithNotes
"@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.FixedIncomeAnalyticsExtractionRequest",
I store this data in a separate list. Then, using a HashMap and a for-each loop, I merge the two result sets into a single list of objects, with each object representing the data for a unique instrument identified by its instrumentISIN (a stripped-down sketch of this merge is below).
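For reference, this is roughly what the merge step looks like. PricingRow, AnalyticsRow, and MergedInstrument are placeholder records standing in for my actual DTOs, with illustrative field names only:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InstrumentMerger {

    // Placeholder records standing in for my real DTOs: PricingRow carries the
    // 6 EndOfDayPricing fields, AnalyticsRow the single FixedIncomeAnalytics
    // field; only one illustrative field is shown for each.
    record PricingRow(String instrumentIsin, String closePrice) {}
    record AnalyticsRow(String instrumentIsin, String analyticValue) {}
    record MergedInstrument(String instrumentIsin, String closePrice, String analyticValue) {}

    public List<MergedInstrument> merge(List<PricingRow> pricing, List<AnalyticsRow> analytics) {
        // Index the analytics rows by ISIN so the join is a single pass.
        Map<String, AnalyticsRow> analyticsByIsin = new HashMap<>();
        for (AnalyticsRow row : analytics) {
            analyticsByIsin.put(row.instrumentIsin(), row);
        }

        // Walk the pricing rows and attach the matching analytics value, if any.
        List<MergedInstrument> merged = new ArrayList<>(pricing.size());
        for (PricingRow row : pricing) {
            AnalyticsRow extra = analyticsByIsin.get(row.instrumentIsin());
            merged.add(new MergedInstrument(
                row.instrumentIsin(),
                row.closePrice(),
                extra != null ? extra.analyticValue() : null));
        }
        return merged;
    }
}

At peak, both source lists and the merged list are alive at the same time, which I assume contributes to the memory pressure.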