

Downloading data via the Eikon API is unstable

Hello,

Attached code: test_stock_peers_and_relative_valuation2.txt

I am reaching out to request your expertise and advice regarding an issue we are encountering with downloading factor data via the Eikon API. In our daily fund operations, we need to download 20 specific factors for approximately 6,000 US stocks. This process is essential and occurs daily.

Unfortunately, our current methodology, as detailed in the attached code (.txt file), requires upwards of 400 seconds to complete on its most efficient days. More concerning, it frequently encounters issues, particularly with certain factors, where it fails to complete the download at all (error messages attached). This issue persists across several servers we have tested, leading to significant operational delays. Notably, retrieving estimate factors is particularly prone to these interruptions.

Given these challenges, I suspect that our approach may be outdated or not aligned with best practices. It would be immensely helpful if you could review our attached code and provide any recommendations or insights on how we might make our data retrieval more efficient and reliable. Your guidance on this matter would be greatly appreciated, as it would significantly impact our operational efficiency and reliability.


Attached: (1) the list of factors:

TR.RelValPENTM
TR.RelValEVEBITDANTM
TR.RelValEVSalesNTM
TR.RelValPriceCashFlowNTM
TR.RelValPriceBookNTM
TR.RelValDividendYieldNTM
TR.RelValPETTM
TR.RelValEVEBITDATTM
TR.RelValEVSalesTTM
TR.RelValPriceCashFlowTTM
TR.RelValPriceBookTTM
TR.RelValDividendYieldTTM
TR.PricePctChg1D
TR.TotalReturn1Mo
TR.FwdPtoEPSSmartEst(Period=NTM)
TR.PtoREVSmartEst(Period=NTM)
TR.PtoCPSSmartEst(Period=NTM)
TR.EVtoREVSmartEst(Period=NTM)
TR.EVtoEBITDASmartEst(Period=NTM)
TR.EPSSmartEstQtrtoYearAgoQtrGrowth
TR.ROESmartEst(Period=NTM)
TR.F.TotDebtPctofTotEq
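For convenience, the same fields can be transcribed into a Python list and passed as the fields argument of a data request (the list below is our transcription of the attachment; the name FACTOR_FIELDS is ours):

# The 22 fields from the attached factor list
FACTOR_FIELDS = [
    'TR.RelValPENTM', 'TR.RelValEVEBITDANTM', 'TR.RelValEVSalesNTM',
    'TR.RelValPriceCashFlowNTM', 'TR.RelValPriceBookNTM',
    'TR.RelValDividendYieldNTM', 'TR.RelValPETTM', 'TR.RelValEVEBITDATTM',
    'TR.RelValEVSalesTTM', 'TR.RelValPriceCashFlowTTM',
    'TR.RelValPriceBookTTM', 'TR.RelValDividendYieldTTM',
    'TR.PricePctChg1D', 'TR.TotalReturn1Mo',
    'TR.FwdPtoEPSSmartEst(Period=NTM)', 'TR.PtoREVSmartEst(Period=NTM)',
    'TR.PtoCPSSmartEst(Period=NTM)', 'TR.EVtoREVSmartEst(Period=NTM)',
    'TR.EVtoEBITDASmartEst(Period=NTM)', 'TR.EPSSmartEstQtrtoYearAgoQtrGrowth',
    'TR.ROESmartEst(Period=NTM)', 'TR.F.TotDebtPctofTotEq',
]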


(2) Error message: attached screenshot 1708830441744.png



@operations06

Hi,

Thank you for your participation in the forum.

Is the reply below satisfactory in answering your question?

If yes, please click the 'Accept' text next to the reply. This will guide all community members who have a similar question.

Otherwise, please post again with further details about your question.

Thanks,

AHS

Please be informed that a reply has been verified as correct in answering the question and has been marked as such.

Thanks,

AHS


1 Answer


@operations06 Thanks for your question, and sorry to hear about your issue. I had a brief look at your code and would recommend using our latest Data Libraries instead of the older Eikon API package:

pip install refinitiv-data

After you have installed the package, open a session:

import refinitiv.data as rd
import pandas as pd

# Open the default session (a desktop session, which requires Eikon or
# Workspace to be running with a valid app key configured)
rd.open_session()
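If this runs as a scheduled daily job, it is also worth releasing the session once the batch completes; a minimal sketch:

# At the end of the batch run, close the session
rd.close_session()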

You mentioned you have a universe of 6,000 RICs and around 20 fields, so that's around 120,000 datapoints. The per-API-call limit is 10,000 datapoints, so you need to chunk the universe and iterate. First, let me build a large universe:

riclist1 = rd.get_data(universe=['0#.SPX', '0#.RUT', '0#.STOXX'], fields='TR.RIC')
riclist1
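As a quick sanity check on chunk size: each call may return at most 10,000 datapoints, so with roughly 20 fields per RIC a chunk could in principle hold up to about 500 RICs; the example below uses a conservative 50. A minimal sketch of that arithmetic (the variable names are ours):

# Chunk-size budget under the 10,000-datapoint-per-call ceiling
PER_CALL_LIMIT = 10_000   # max datapoints returned by one API call
N_FIELDS = 20             # ~20 fields requested per RIC
max_rics_per_call = PER_CALL_LIMIT // N_FIELDS   # = 500
chunk_size = 50           # conservative choice used below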

Next, let me define a chunking function to break the large RIC list into chunks of n RICs:

def chunks(l, n):
    """Yield successive n-sized chunks from list l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

rics = list(chunks(riclist1['RIC'].to_list(), 50))
cur_date = '2024-01-03'
data1 = pd.DataFrame()

for chunk in rics:
    try:
        df1 = rd.get_data(
            universe=chunk,
            fields=['TR.RevenueMean(SDate=' + cur_date + ',Period=FY1).calcdate',
                    'TR.RevenueMean(SDate=' + cur_date + ',Period=FY1)',
                    'TR.F.MktCap(SDate=' + cur_date + ',Period=FY0).calcdate',
                    'TR.F.MktCap(SDate=' + cur_date + ',Period=FY0)'])
        # Append this chunk's results to the running DataFrame
        if len(data1):
            data1 = pd.concat([data1, df1])
        else:
            data1 = df1
    except Exception:
        print(str(chunk) + " failed")

data1.reset_index(drop=False, inplace=True)
data1

(Resulting DataFrame shown in the attached screenshot 1708970042214.png.)

You should try to avoid sending larger payloads during the busiest periods. Keeping requests smaller increases the probability of a successful return without having to resubmit the API call on failure. Also note that request throttling is a maximum of 5 requests per second. You can find more details about limits and throttling here. Let us know if this is useful.
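To stay under that 5-requests-per-second throttle and recover from transient failures automatically, one option is to pace the loop and retry failed chunks. A minimal sketch, reusing the rics chunks from above and the FACTOR_FIELDS list transcribed earlier (the pacing interval and retry count are our own choices, not documented limits):

import time

MIN_INTERVAL = 0.25   # ~4 requests/second, safely under the 5/second throttle
MAX_RETRIES = 3

frames = []
for chunk in rics:
    for attempt in range(MAX_RETRIES):
        try:
            frames.append(rd.get_data(universe=chunk, fields=FACTOR_FIELDS))
            break
        except Exception as exc:
            # Back off a little longer after each failed attempt
            print(f"chunk failed (attempt {attempt + 1}): {exc}")
            time.sleep(1 + attempt)
    time.sleep(MIN_INTERVAL)   # pace requests between chunks

data1 = pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()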

I hope this helps.


