I followed the suggestion of Alex Putkov (https://community.developers.refinitiv.com/questions/57533/1-get-timeseries-cuts-off-historic-data-2-get-data.html) to split the ~3000 requests into chunks, using the R (not Python) code below with a chunk size of 30.
j = 1
RIC.long = NULL
chunk = 30

# Loop over the chunk end points; unique() avoids requesting the final
# chunk twice when length(RIC) is an exact multiple of chunk
for (i in unique(c(seq(chunk, length(RIC), by = chunk), length(RIC)))) {
  RIC.list = as.list(RIC[j:i])   # current chunk of RICs
  RIC.long = rbind(RIC.long,
                   get_data(RIC.list,
                            list("TR.PriceClose.Date", "TR.PriceClose"),
                            list("Frq" = "D", "SDate" = startDate,
                                 "EDate" = "2020-04-09")))
  j = i + 1
}
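For what it's worth, the chunk boundaries can also be built up front with split(), which avoids the running j/i index bookkeeping entirely. This is just a sketch of the indexing logic, using a hypothetical placeholder vector of 65 fake RICs in place of the real list:

```r
# Hypothetical example: 65 dummy RICs instead of the real ~3000
RIC <- paste0("RIC", 1:65)
chunk <- 30

# ceiling(seq_along(RIC) / chunk) assigns group 1 to items 1-30,
# group 2 to items 31-60, group 3 to items 61-65
groups <- split(RIC, ceiling(seq_along(RIC) / chunk))
lengths(groups)  # 30 30 5
```

Each element of groups could then be passed to get_data() in turn, e.g. via lapply(), with the results combined by rbind() as above.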
Problem:
- It runs fine within a minute for startDate = "2020-04-01", so a short history is OK.
- For startDate = "2005-01-01" in chunks of 30, it takes 33 minutes (granted, I am working from home with 14 Mbit/s download and 8 Mbit/s upload).
Question: Is this normal? With my Bloomberg R API this would have been a single request completed in under a minute. I also tried downloading in a for loop, RIC by RIC, which takes about 60 minutes. A chunk size of 50 throws errors.
Thank you for any insights on whether this is normal for Eikon or whether I am doing something wrong.