I am requesting 35 fields for 6600 stocks and the request is taking an hour to complete. Is this expected, or do I have an issue somewhere? I have broken the request into chunks of 50 stocks, but this doesn't seem to help.
Hi @whardwick,
I am not sure if you have shared your full code with me, but with the code below, which I updated a bit from yours, I get the results for 5000 RICs in about 6 minutes:
import eikon as ek
import numpy as np
import pandas as pd

# split the ticker list into chunks of 2000 RICs per request
chunks = [ticker_list[i:i + 2000] for i in range(0, len(ticker_list), 2000)]
max_retries = 3
merged_df = pd.DataFrame()

for chunk in chunks:
    retries = 0
    while retries < max_retries:
        try:
            df, err = ek.get_data(chunk, fields)
            # treat empty strings as missing values and append to the combined frame
            df.replace('', np.nan, inplace=True)
            merged_df = pd.concat([merged_df, df], ignore_index=True)
        except:
            # the request failed - count the attempt and retry the same chunk
            retries += 1
            print(retries)
            continue
        # the request succeeded, move on to the next chunk
        break
Please try this and let me know how it goes. Please note that you may increase the chunk size even further and it will run quicker; however, it may throw bad requests. In any case, 2000 seemed stable for me.
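As a side note, if you prefer the retry loop to react only to failed Eikon requests (such as the bad requests mentioned above) rather than swallowing every exception with a bare except, something along these lines could work. This is a sketch only, assuming the eikon package exposes EikonError for failed API calls; fetch_in_chunks is just an illustrative name:

import eikon as ek
import numpy as np
import pandas as pd

def fetch_in_chunks(ticker_list, fields, chunk_size=2000, max_retries=3):
    """Fetch data chunk by chunk, retrying a chunk only on Eikon API errors."""
    merged_df = pd.DataFrame()
    chunks = [ticker_list[i:i + chunk_size] for i in range(0, len(ticker_list), chunk_size)]
    for chunk in chunks:
        for attempt in range(1, max_retries + 1):
            try:
                df, err = ek.get_data(chunk, fields)
            except ek.EikonError as exc:  # assumption: raised on bad requests / timeouts
                print(f'attempt {attempt} failed: {exc}')
                continue
            # success: clean empty strings and append to the combined frame
            df.replace('', np.nan, inplace=True)
            merged_df = pd.concat([merged_df, df], ignore_index=True)
            break
    return merged_df

The behaviour is otherwise the same as the snippet above; the difference is that unexpected errors (for example a KeyboardInterrupt or a programming mistake) surface immediately instead of being retried silently.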
Best regards,
Haykaz
I would say that is expected considering the number of stocks you are making the request for. Would you mind sharing your code here, so I can have a look and see if there is a way to optimize it?
Best regards,
Hi Haykaz, thanks for the response:

# breaks tickers into chunks
chunks = [ticker_list[i:i + 200] for i in range(0, len(ticker_list), 200)]
max_retries = 3
for chunk in chunks:
    retries = 0
    while retries < max_retries:
        try:
            df, err = ek.get_data(chunk, fields)
            df.replace('', np.nan, inplace=True)
            merged_df = pd.concat([merged_df, df], ignore_index=True)
        except:
            retries += 1
            continue
        break
Thanks, Haykaz. Perhaps it's the fields that are the issue. They exceed the character limit, so I have uploaded them: fields.txt
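For anyone following along, the attached field list could be read back into the fields argument with something like the snippet below. This is a minimal sketch that assumes fields.txt contains one field name per line:

# load the uploaded field list; assumption: one field name per line
with open('fields.txt') as f:
    fields = [line.strip() for line in f if line.strip()]

print(len(fields), 'fields loaded')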