I download data in loops. Performance is not an issue (I also add a short break between the individual series).
The problem is that the download breaks off at a different series each time (at the 3rd, the 10th, the 13th, etc.), so the failure is not reproducible. It seems to be a matter of the performance of the data delivery systems.
My solution for now: run the code up to 10 times until it works. If it still fails, wait a few hours and repeat. If that fails too, try again on Saturdays, when the systems are less busy (except last Saturday, when it didn't work until Sunday afternoon).
It is time-consuming and expensive.
Can you help somehow, upgrade your system, or give us a hint on how to proceed?
Maybe there is a possibility to switch the data center; in my case it is always EMEA1, even when I connect through a VPN with a US IP.
The way I read your post is that you have a data retrieval job that you run with some periodicity, and this job consists of multiple data requests. If any of these requests fails, you rerun the entire job at a later time, right? If I interpret you correctly, then rather than rerunning the full job later, you can resubmit just the failed data request within the same run. For example, the following snippet resends a failed request up to two more times before giving up:
    import time
    import eikon as ek

    for attempt in range(3):
        try:
            df, e = ek.get_data(instruments=instrument_list, fields=field_list)
            break  # request succeeded, stop retrying
        except ek.EikonError as err:
            # Retry only on request/timeout/server errors; re-raise anything else,
            # and re-raise if the final attempt has also failed.
            if err.code not in [400, 408, 500] or attempt == 2:
                raise
            time.sleep(1)  # short pause before resending the request
There are numerous reasons why a data request can fail, and it is not abnormal for one request out of many to fail occasionally. If this is indeed what you are experiencing, then the solution is to add fairly simple defensive code (like the snippet above) to your data retrieval job.

If, however, the issue you experience is different, e.g. you have a large number of requests failing, or there are times when you cannot retrieve any data at all, then we need the specifics to look into it. The best way to go about it is to open a case with the Refinitiv Helpdesk. Before you open the case, it would be great if you could collect examples of the failed requests, ideally as a Fiddler capture. If you cannot get a Fiddler capture, then at a minimum we would need the specific API calls that failed, the exact date and time of each failure, and the trace of the error returned.
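Those diagnostics can also be collected programmatically at the point where the request is made. The sketch below is a generic helper, not part of the Eikon API: it wraps any callable, retries it a few times, and records a timestamp, a description of the call, and the full traceback for every failed attempt. The names `call_with_diagnostics` and `flaky_request` are hypothetical; in practice the wrapped callable would be the real `ek.get_data` call.

```python
import time
import traceback
from datetime import datetime, timezone

def call_with_diagnostics(func, *args, retries=3, delay=0.0, **kwargs):
    """Call func, retrying on failure and recording one diagnostic
    record (timestamp, call description, traceback) per failed attempt."""
    failures = []
    for attempt in range(retries):
        try:
            return func(*args, **kwargs), failures
        except Exception:
            failures.append({
                "when": datetime.now(timezone.utc).isoformat(),
                "call": f"{func.__name__} args={args!r} kwargs={kwargs!r}",
                "trace": traceback.format_exc(),
            })
            if attempt < retries - 1:
                time.sleep(delay)  # brief pause before the next attempt
    return None, failures  # all attempts failed; failures holds the evidence

# Demo with a stand-in for the real data request: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_request(instruments, fields):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("simulated transient server error")
    return {"instruments": instruments, "fields": fields}

result, failures = call_with_diagnostics(flaky_request, ["IBM.N"], ["TR.PriceClose"])
print(len(failures))  # prints 2: two failed attempts were recorded before success
```

The `failures` list is exactly the information the Helpdesk would ask for: what was called, when it failed, and the error trace, so it can be dumped to a log file and attached to the case.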