Datascope, ReportExtractions response containing min values
Hi,
We are using the DSS Reuters REST API to download a rates file. We have seen that 'LastWriteTimeUtc' in the result of a call to 'ReportExtractions' has come back with the DateTimeOffset minimum value (01/01/0001 00:00:00 +00:00) for both the .notes and .xml files.
We have seen from the extracted file's notes that it took over 4 minutes for processing to complete.
In a case like this, where files take a long time to be processed, does Reuters populate fields like 'LastWriteTimeUtc' with default values?
-------------------------------------
Edit:
We are making the following calls to download the content of a rates xml file:
1. Authentication:
POST https://hosted.datascopeapi.reuters.com/RestApi/v1/Authentication/RequestToken
We pass in credentials and receive a token back, which we pass in all subsequent calls.
2. LastExtraction:
We pass in the appropriate schedule ID for the given file and receive a response, from which we take the report extraction ID to use in the next call.
3. Files:
GET https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ReportExtractions('111111')/Files
We pass in the extraction ID from the previous call and receive the file data associated with the last extraction. The files are the rates file and the notes file. We spool through the file data and get the file ID of the full rates xml file.
4. Value:
GET https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractedFiles('V111111111=')/$value
We pass in the extracted file id from the previous call. We download the content of the file.
We have noticed on a few occasions that the response we received from call number 3 (Files) contained fields with default values. One of them was LastWriteTimeUtc, which was set to "01/01/0001 00:00:00 +00:00". We were wondering what the cause could be.
Unfortunately we can't provide any notes, as they were overwritten by the next extraction and we don't store them. Please let me know if you need any other information.
Answers
Is there any chance I can get some captures of the requests you are making for this case?
I want to make sure I understand your use case and the timing involved.
Can you provide the notes for the extraction?
You can always email me directly at rick.weyrauch@thomsonreuters.com with specific request data (so you do not have to share confidential data here on the forums).
I see that this file info data is a bit of a "feature in flight": there is a legacy model and a new model whereby we can always provide this data on each generated file. The problem is that most of the file results being returned today use the legacy model, which cannot always fully populate this data (which is why we needed a new model).
For the time being, this data may be of limited use and can be ignored. I will have to check with Product Management about when we can expect to be fully converted to the new model and always have this data provided.
Hi, Rick asked the product team to review this request. We do not plan to make the change to the converted model anytime soon in 2016. We will try to prioritize the change in our backlog for 2017. As Rick mentioned, this data may be of limited use and can be ignored for now.
The LastWriteTimeUtc and ReceivedDateUTC fields are of no great importance to us, so they can be ignored as you have said, but the Size field is. We have seen that when Size is '0' we do not receive any file in the final call (call 4, see original question).
@Rick Weyrauch, you state in your reply that "The problem is that most of the file results being returned today use the legacy model which cannot always fully populate this data (why we needed a new model)." Can you confirm that we are receiving both models, and that on any given day we could be returned a legacy model object, which would cause no file to be returned?
We are using 3 different schedules which all point to the creation of the same xml rates file. Yesterday, when using call 3 (see original question) for these 3 schedules, we saw that one of the schedule IDs was returning a legacy model object with no associated rates file, while the other 2 were returning the correct objects with an associated rates file. Is this expected behaviour?
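As a defensive measure while the legacy model is still in play, our client could skip call 4 whenever the metadata signals an empty file. A hypothetical check (the Size field name follows the discussion above; the sample records are made up):

```python
def has_downloadable_content(file_info: dict) -> bool:
    """Only attempt the $value download when the Files response reports real content."""
    return file_info.get("Size", 0) > 0

# Sample Files entries: a legacy-model record with Size 0, and a real one
files = [
    {"ExtractedFileName": "rates.xml", "Size": 0},       # legacy-model entry: skip
    {"ExtractedFileName": "rates.xml", "Size": 20480},   # real content: download
]
downloadable = [f for f in files if has_downloadable_content(f)]
print(len(downloadable))  # 1
```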
You stated "Unfortunately we can't provide any notes as they were overridden by the next extraction and we don't store them," but I need to stress the importance of the notes. The data contained in the notes is crucial to identifying the proper extraction to review when there are questions about an extraction run or the data it returned. The notes are "your receipt", so we recommend saving them for at least a few days, or for as long as it takes your system to raise any concerns.
Alternatively, you can review the diagnostic header section of the help for other identifiers that your system can record for these inquiries.
Would it be possible for you to add some logging and then provide specific identifiers or notes for each "good" and "bad" request you are commenting on?
In the meantime, I am looking into why you would ever get back a file reference where the .ContentsExist property would be true while at the same time the .Size is 0 and there are no contents.
Hi @Rick Weyrauch,
Has there been any update on this question?
This was sent to the poster via email...
Ok, so while we can consider that we have a bug here, it's not really a REST API issue exactly. It is an artifact of our old FTP roots; this would not have ended well for a pure FTP client either.
You have 3 schedules that all use the same output file name – without any %D %T macros to make each of them a unique filename. So, as these run, they stomp on each other, so to speak, as they each take turns updating the one file in your Reports directory. In the GUI this manifests itself as "you only get 1 file no matter which schedule you are using." That is, all three extractions have file contents, but the contents are only correct for one of them. The REST API tries to solve that by only associating the file that does exist with the extraction it goes with. It's not a perfect algorithm, but as long as we do not have this configuration it works pretty well.
So, in a way, the answer you got back was perfectly correct: the file data for [some extractions] is no longer available even though the record of it happening is available. If the client sets up each schedule to use unique output filenames, this artifact should go away and they will always get the file contents they are expecting.
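To illustrate the fix described above, assuming the %D/%T macros expand to the run date and time: each schedule then writes a distinct file instead of taking turns overwriting a shared one. A rough Python equivalent of what unique naming buys you (the name pattern is illustrative, not the exact DSS macro output):

```python
from datetime import datetime

def unique_output_name(base: str, when: datetime) -> str:
    """Mimic what %D/%T macros achieve: one distinct filename per extraction run."""
    return f"{base}.{when:%Y%m%d}.{when:%H%M%S}.xml"

run = datetime(2016, 6, 15, 9, 30, 0)
print(unique_output_name("rates", run))  # rates.20160615.093000.xml
```

With distinct names, call 3 can always associate each extraction with a file that still exists.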