
Certain Refinitiv Data requests work individually, but not concurrently (Python)

Hi everyone,

I have an issue with concurrency in functions that use get_data(). My workflow has multiple get_data() and get_history() requests spread across different asynchronous functions, which I execute concurrently using Python's `asyncio` library. When these functions are executed individually (one at a time in their respective Jupyter Notebook cells), the data requested via get_data() is retrieved correctly. But when I run everything concurrently, some data cannot be retrieved and an RDError is thrown, looking something like this:

RD.ERROR : Data cannot be retrieved for Instrument ['AAPL.O'] fields ['TR.ISIN','TR.FiExchange', 'TR.TRBCIndustry', 'TR.PCFullTimeEmployee', 'TR.CompanyMarketCapitalization', 'TR.BusinessSummary']

This is especially true for Income Statement, Earnings, Return, and Price Related Data.

I have taken some steps to troubleshoot this issue:
1. Increased the asyncio.sleep() delay after every rd.get_data() / rd.get_history() invocation, without success.
2. Double-checked the individual field codes for each of the data points I am trying to pull (e.g. TR.F.ComShrOutsTot / TR.PricePctChgYTD) in the desktop platform, to confirm there is actually data behind them. There is, so I can rule this out as a source of the error.
3. Tested the functions individually; there seems to be no issue with my syntax.
4. Tested my `asyncio` code (which gathers a list of my function invocations as tasks and awaits their results while running concurrently) on another basic use case. It worked, so I can rule out my code being written incorrectly.
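For context, the kind of basic gather sanity check described in step 4 might look something like this minimal sketch (illustrative only, not the actual test code from the question):

```python
import asyncio

async def double(x):
    # Trivial coroutine standing in for one of the data-retrieval functions.
    await asyncio.sleep(0.01)
    return 2 * x

async def main():
    # Same shape as the real workflow: build a list of tasks, gather, await.
    tasks = [double(i) for i in range(5)]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())  # in Jupyter, use `await main()` instead
```

If this pattern works but the real workload fails, the problem is more likely in the requests themselves (rate limits, timeouts) than in the gathering logic.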

This leads me to think that either Refinitiv's server cannot handle all the requests I am making concurrently, or something is wrong with my asynchronous code. I am using the `desktop.workspace` connection. I would appreciate some light shed on this issue; any kind of response is welcome.

Thanks!

Tags: python, technology, rdp-api, refinitiv-data-platform, data, rdp, jupyter-notebook

Accepted

Hi @vishal.nanwani ,


Have you tried leveraging parallel requests from the RD Libraries? There is a GitHub example for Historical Pricing here. Below I have transformed that example into a fundamental_and_reference call, which should fit your request:

import asyncio

import refinitiv.data as rd
from refinitiv.data.content import fundamental_and_reference

rd.open_session()

fields = ['TR.ISIN', 'TR.FiExchange', 'TR.TRBCIndustry', 'TR.PCFullTimeEmployee',
          'TR.CompanyMarketCapitalization', 'TR.BusinessSummary']

tasks = asyncio.gather(
    fundamental_and_reference.Definition(universe='VOD.L', fields=fields).get_data_async(closure='Vodafone'),
    fundamental_and_reference.Definition(universe='AAPL.O', fields=fields).get_data_async(closure='Apple'),
    fundamental_and_reference.Definition(universe='MSFT.O', fields=fields).get_data_async(closure='Microsoft')
)
await tasks

def display_response(response):
    print(response)
    print("\nResponse received for", response.closure)
    if response.is_success:
        display(response.data.df)
    else:
        print(response.http_status)

vodafone, apple, microsoft = tasks.result()

display_response(vodafone)
display_response(apple)
display_response(microsoft)


screenshot-2024-07-18-at-144446.png


Hope this helps.


Best regards,

Haykaz



Hi @aramyan.h ,


Thank you for your reply! I will try this out now. I hadn't tried it before, though I have seen it in the GitHub documentation and forgot about it. Regarding your answer, I have a few things to clarify:


1. Where can I find all the refinitiv.data.content templates from which I can get data? I cannot seem to find them.

2. Is this like a search template? I see similarities, since search templates work in a similar way: there is a "library" of specific data from which one can grab data. (Asking out of curiosity; I'm not using search templates right now.)

3. How is this different from using asyncio around get_data() calls with the fields passed in?

4. I have already written my processing code for the DataFrames returned by get_data(). Will this approach return the same data point names, so that I can seamlessly swap out the get_data() calls? (FYI, as an example, my get_data / get_history calls look like this:)


generation_information = rd.get_data(
    universe=[f'{ticker}'],
    fields=['TR.ISIN', 'TR.FiExchange', 'TR.TRBCIndustry', 'TR.PCFullTimeEmployee',
            'TR.CompanyMarketCapitalization', 'TR.BusinessSummary']
)
income_stmt = rd.get_history(
    universe=[f'{ticker}'],
    fields=["TR.F.TotRevenue", "TR.F.OpExpnTot", "TR.F.OpIncPerShr",
            "TR.F.IncBefTax", "TR.F.NetIncAfterTax", "TR.F.ComShrOutsTot"],
    interval="1Y",
    start="2019-01-01",
    end="2024-10-01"
)

I will implement your suggestion and update this thread should there be problems.


Thx!


Hi @vishal.nanwani ,


Thanks for the follow up!

Let me answer your questions below:

Question 1 & 2

The Refinitiv Data Library has three layers. The get_data and get_history functions come from the Access layer, which offers simplified, ease-of-use functions to access the data. The Content layer provides value-add capabilities to manage and access the content through its interface. Finally, the Delivery layer is the lowest layer, offering the most flexibility: you can call the RDP endpoints directly and use some enhanced parameters, though the interface is a bit more complex. You can read more about the layers in the Reference guide. The complete examples for the Content layer can be found in our GitHub repo.

Question 3

As far as I know there won't be differences as such; however, if you want to handle the asynchronous calls yourself, you need to be mindful of the limitations you then have to manage, such as the number of simultaneous requests, throttling, etc., which are taken care of for you in the get_data_async method I shared above.
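For reference, if you do decide to manage the concurrency yourself, one common way to cap the number of simultaneous requests is an `asyncio.Semaphore`. This is a generic sketch, not RD-library code: `fetch` below is a hypothetical stand-in for an rd.get_data-style call.

```python
import asyncio

async def fetch(ticker):
    # Hypothetical stand-in for a data request (e.g. an rd.get_data call).
    await asyncio.sleep(0.01)
    return f"data for {ticker}"

async def fetch_throttled(sem, ticker):
    # The semaphore caps how many requests are in flight at any moment.
    async with sem:
        return await fetch(ticker)

async def main(tickers, max_concurrent=5):
    sem = asyncio.Semaphore(max_concurrent)
    return await asyncio.gather(*(fetch_throttled(sem, t) for t in tickers))

results = asyncio.run(main(['VOD.L', 'AAPL.O', 'MSFT.O']))
```

The point is that `gather` still sees all the tasks at once, but at most `max_concurrent` of them are actually making a request at the same time.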

Question 4

Yes, the outputs, including the field names and the content should be the same. Let me know if you experience any differences and we can look into it.


Best regards,

Haykaz


Hi @aramyan.h ,

Thank you for your reply! I tried out the first set of asynchronous code, and I am a bit stuck on something. I looked through the documentation and couldn't find an asynchronous equivalent of get_history that can be used with content modules like Fundamental and Reference, Estimates, etc. As an example, some of my data calls look like this:

income_stmt = rd.get_history(
    universe=[f'{ticker}'],
    fields=["TR.F.TotRevenue", "TR.F.OpExpnTot", "TR.F.OpIncPerShr",
            "TR.F.IncBefTax", "TR.F.NetIncAfterTax", "TR.F.ComShrOutsTot"],
    interval="1Y",
    start="2019-01-01",
    end="2024-10-01"
)

As a result, I can't really apply the get_data_async() function to this data: if I try to transfer the above syntax over, I get an error like this:

TypeError: Definition.__init__() got an unexpected keyword argument 'interval'

This occurs when I am trying to get the data over intervals. I used the `fundamental_and_reference` module in the Content layer to try to do this, following your example above. Please guide me on how exactly I can get historical data from `fundamental_and_reference`-type modules within the Content layer, which don't already have historical retrieval functions (unlike the Historical Pricing module in the Content layer).


Thanks!


Hi @vishal.nanwani ,


If you want to request fundamental_and_reference data for an interval, you should use the `parameters` argument and provide the dates and the frequency for your request (similar to how you would do it with get_data); see below:

fields = ["TR.F.TotRevenue", "TR.F.TotRevenue.date", "TR.F.OpExpnTot", "TR.F.OpIncPerShr",
          "TR.F.IncBefTax", "TR.F.NetIncAfterTax", "TR.F.ComShrOutsTot"]
params = {"SDate": "2019-01-01", "Edate": "2024-10-01", "FRQ": "FY"}

tasks = asyncio.gather(
    fundamental_and_reference.Definition(universe='VOD.L', fields=fields, parameters=params).get_data_async(closure='Vodafone'),
    fundamental_and_reference.Definition(universe='AAPL.O', fields=fields, parameters=params).get_data_async(closure='Apple'),
    fundamental_and_reference.Definition(universe='MSFT.O', fields=fields, parameters=params).get_data_async(closure='Microsoft')
)

screenshot-2024-07-22-at-101258.png

Best regards,

Haykaz



Hi @aramyan.h ,

I have tested your example above, and so far it has yielded some positive results! I can get some of the data points I was trying to access fairly quickly. The next thing I want to ask about: for Balance Sheet related data, when I used get_data() I would get the dates alongside the values, as was the case for my Profitability and 5-Year Quarterly Revenue related information. But when I use get_data_async(), I am unable to get the date alongside the values. As an example, see the attached file: the top image shows what get_data_async() returns, the bottom shows what get_data() returns, both for my Balance Sheet data.

I would like the date to be returned alongside the values I am retrieving, but I am unsure how to do this. It would be for the following values:

Balance Sheet: "TR.F.CashCashEquivTot","TR.F.TotCurrAssets","TR.F.PPENetTot","TR.F.TotAssets","TR.F.DebtLTTot",'TR.F.TotLiab','TR.CurrentLiabilitiesActValue'

Quarterly Revenue: TR.Revenue (over yearly and quarterly intervals)

Other Values: "TR.F.ReturnAvgTotAssetsPctTTM",'TR.F.ReturnAvgComEqPctTTM', "TR.EBITDAActValue","TR.EBITActValue","TR.ROAActValue","TR.ROEActValue","TR.ROCEActValue","TR.F.AssetTurnover",'TR.F.IncAfterTaxMargPct','TR.RevenueActValue','TR.F.TotAssets','TR.F.LTDebtPctofTotEq','TR.F.TotDebtPctofTotEq'

Right now, the data only comes with the default index (0, 1, ...).

Appreciate your help!




balancesheet_sync_async_dataframe.png

Hi @vishal.nanwani ,


Happy to hear that it works and you are happy with the results. For the date to appear, you need to add an additional field ending in '.date', e.g. 'TR.Revenue.date'. See the example below:

fundamental_and_reference.Definition(universe='VOD.L', fields=["TR.F.TotRevenue", "TR.F.TotRevenue.date", "TR.F.OpExpnTot","TR.F.OpIncPerShr","TR.F.IncBefTax","TR.F.NetIncAfterTax","TR.F.ComShrOutsTot"],  parameters ={"SDate": "2019-01-01", "Edate": "2024-10-01", "FRQ":"FY"}).get_data_async(closure='Vodafone')


Best regards,

Haykaz


Hi @aramyan.h ,


I have run into another problem that I am a little unsure how to solve. As context: I have more than 10 functions running asynchronously, and each function makes on average 1-8 get_data and get_data_async calls. Retrieving all the data takes some time, which causes ReadTimeout errors to come up a lot. So as an error handling mechanism, I decided to catch the RDError and then use the "half-retrieved" data that I stored in a variable inside my try block. It looks like this:

try:
    generation_information_response = await content.fundamental_and_reference.Definition(
        universe=f'{ticker}',
        fields=['TR.ISIN', 'TR.FiExchange', 'TR.TRBCIndustry', 'TR.PCFullTimeEmployee',
                'TR.CompanyMarketCapitalization', 'TR.BusinessSummary']
    ).get_data_async(closure='Apple')
    # ......................................................
    return formatted_generation_information
except rd.errors.RDError as re:
    if not generation_information.empty:
        print(f"The error occurred here is: {re}")
        # ... processing done and assigned to another variable ...
        return accumulated_generation_information
except Exception as e:
    print(f"An error occurred: {e}")
    if not generation_information.empty:
        # ... processing done and assigned to another variable ...
        return accumulated_generation_information

This ensures that I keep the half-retrieved data, or whatever has been retrieved so far, since there isn't time to retrieve everything. Please advise whether this is a good error handling approach or whether something better should be done (I am asking specifically in the context of the get_data_async() function).
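As a point of comparison, a common plain-asyncio pattern for keeping whatever succeeded when some tasks fail is `gather(..., return_exceptions=True)`, which returns exception objects in place of failed results instead of aborting the whole batch. A minimal sketch with a hypothetical `fetch` stand-in (not RD-library code):

```python
import asyncio

async def fetch(ticker, fail=False):
    # Hypothetical stand-in for an rd.get_data / get_data_async call.
    await asyncio.sleep(0.01)
    if fail:
        raise TimeoutError(f"read timed out for {ticker}")
    return f"data for {ticker}"

async def main():
    results = await asyncio.gather(
        fetch('AAPL.O'),
        fetch('VOD.L', fail=True),
        fetch('MSFT.O'),
        return_exceptions=True,  # failures come back as exception objects
    )
    # Keep the successful results; collect the failures for a retry pass.
    ok = [r for r in results if not isinstance(r, Exception)]
    failed = [r for r in results if isinstance(r, Exception)]
    return ok, failed

ok, failed = asyncio.run(main())
```

This avoids try/except around every call and makes the "partial data" outcome explicit rather than a side effect of where the exception happened to be raised.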


Another problem I am facing is how long a read is allowed to run. Right now, I get read errors sooner than the HTTP timeout I have configured. I have set it in refinitiv-data.config.json to 120 seconds, and even added the line `rd.get_config()["http.request-timeout"] = 120` when I run my connection code. I need help making the reads last longer, because that would reduce the amount of error handling required.


Thanks!


Hi @vishal.nanwani ,


I would say the code should be structured so that it awaits the responses from the requests; that way you won't have half-retrieved data. I am not sure about the technical details of your code, but if you share a piece of code that can reproduce the issue, I will try to help.


Best regards,

Haykaz


Hi @aramyan.h ,

A code sample reflecting this looks as follows (variable names and some data hidden):

async def base_stock_statistics(ticker, datapoints):
    a_response = rd.get_data(
        universe=[f'{ticker}'],
        fields=datapoints
    )
    return a_response

async def generate_information(ticker):
    try:
        print("Before Retrieving Base Stock Statistics")
        data_points = ['HST_CLOSE', 'TR.OPENPRICE', 'BID', 'ASK', ...(12 other data points), 'TR.TPEstValue']
        a = await base_stock_statistics(ticker, data_points)

        print("Before Retrieving Day High/Low Pricing")
        # 3 TR. data points
        b = await content.fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.PriceOpen', 'TR.PriceHigh', 'TR.PriceLow']).get_data_async(closure='Apple'),

        print("Before Retrieving ____")
        # 2 TR. data points
        c = await content.fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.Price52WeekLowDate', 'TR.Price52WeekHighDate']).get_data_async(closure='Apple'),

        print("Before XXX Information")
        # 5 TR. data points
        d = await content.fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.PricePctChgYTD', ...(4 other data points)]).get_data_async(closure='Apple'),

        print("Before XYZ Information")
        e = await content.fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.F.ComShrOutsTot', ...(4 other data points)]).get_data_async(closure='Apple'),

        print("Before ABC Information")
        f = await content.fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.Price50DayAverage', 'TR.Price200DayAverage']).get_data_async(closure='Apple')

        # Await tasks concurrently
        # Comment out if fails
        a_response, b_response, c_response, d_response, e_response, f_response = await asyncio.gather(
            a, b, c, d, e, f
        )

        a_response = a_response.iloc[0]
        print("After Retrieving Base Stock Statistics")
        a_response_na_columns = a_response.columns[a_response.isna().any()].tolist()
        print(f"A columns with NA values: {a_response_na_columns}")

        b_response = b_response[0].data.df
        print("After Retrieving Day High/Low Pricing")
        b_response_na_columns = b_response.columns[b_response.isna().any()].tolist()
        print(f"B columns with NA values: {b_response_na_columns}")

        c_response = c_response[0].data.df
        print("After Retrieving 52 Week High/Low Dates")
        c_response_na_columns = c_response.columns[c_response.isna().any()].tolist()
        print(f"C columns with NA values: {c_response_na_columns}")

        d_response = d_response[0].data.df
        print("After Price Change and Return Information")
        d_response_na_columns = d_response.columns[d_response.isna().any()].tolist()
        print(f"D columns with NA values: {d_response_na_columns}")

        e_response = e_response[0].data.df
        print("After Share and Short Interest Information")
        e_response_na_columns = e_response.columns[e_response.isna().any()].tolist()
        print(f"E columns with NA values: {e_response_na_columns}")

        f_response = f_response[0].data.df
        print(type(f_response))
        display(f_response)
        print("After Moving Average Information")
        f_response_na_columns = f_response.columns[f_response.isna().any()].tolist()
        print(f"F columns with NA values: {f_response_na_columns}")

    except rd.errors.RDError as re:
        print(f"The error occurred here is: {re}")
        # if not ratios.empty:
        #     print("Dataframe is not empty")
    except Exception as e:
        print(f"The error occurred here is: {e}")
    finally:
        # Processing code for the data that is retrieved

Here, I have wrapped all of the retrievals in a try block, because there are often ReadTimeouts while connecting to the Refinitiv Data Platform, so I have tried to catch whatever might be problematic inside it. A problem with this is that sometimes the logic doesn't complete (because it is asynchronous) and jumps straight to the `finally` block. Do you have any idea how I can manage this concurrency of Refinitiv data retrieval better? I am quite lost as to how to work around or solve this issue. I appreciate your help!


Hi @vishal.nanwani ,


I am not sure what your whole workflow looks like, but here are several observations based on what you shared:

1. `a = await base_stock_statistics(ticker, data_points)` will return an error, and your code will jump to `finally`, because its result is not a coroutine yet you are passing it to gather.

2. I don't think the try/except here will help for RD errors, as you will only see those errors when you resolve the task.


I have made some changes to your code and also added a small piece where you can check the response status and collect unsuccessful closures for a re-run if necessary. Please see below.

from refinitiv.data.content import fundamental_and_reference
import asyncio

async def base_stock_statistics(ticker, datapoints):
    a_response = fundamental_and_reference.Definition(universe=f'{ticker}', fields=datapoints).get_data_async(closure='a')
    return a_response

async def generate_information(ticker):
    print("Before Retrieving Base Stock Statistics")
    data_points = ['HST_CLOSE', 'TR.OPENPRICE', 'BID', 'ASK', 'TR.TPEstValue']
    a = await base_stock_statistics(ticker, data_points)

    print("Before Retrieving Day High/Low Pricing")
    # 3 TR. data points
    b = fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.PriceOpen', 'TR.PriceHigh', 'TR.PriceLow']).get_data_async(closure='b')

    print("Before Retrieving ____")
    # 2 TR. data points
    c = fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.Price52WeekLowDate', 'TR.Price52WeekHighDate']).get_data_async(closure='c')

    print("Before XXX Information")
    # 5 TR. data points
    d = fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.PricePctChgYTD']).get_data_async(closure='d')

    print("Before XYZ Information")
    e = fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.F.ShrOsTot']).get_data_async(closure='e')

    print("Before ABC Information")
    f = fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.Price50DayAverage', 'TR.Price200DayAverage']).get_data_async(closure='f')

    # Await tasks concurrently
    tasks = asyncio.gather(a, b, c, d, e, f)
    await tasks
    responses = tasks.result()

    unsuc_responses = []
    for response in responses:
        if response.is_success:
            print(response.data.df)
        else:
            unsuc_responses.append(response)
    for res in unsuc_responses:
        print(res.closure, res.errors)

await generate_information('AAPL.O')


I deliberately made the field name for closure e incorrect, just to generate an error. Again, I am not sure of your workflow, but this is what worked for me.

screenshot-2024-08-16-at-115344.png

Hope this helps you make the necessary adaptations to your code.

Best regards,

Haykaz



Hi @aramyan.h ,


Thank you for your reply!! Before doing this, I have another query. Say you have a set of asynchronous functions (wrapping the get_data_async() call), and another asynchronous function that wraps a get_data() call: would something similar to what you have described occur? This is a separate example showing what I mean:

async def g(ticker):
    a = rd.get_data(
        universe=f"{ticker}",
        fields=["TR.____"]
    )
    # OR get_history()
    return a

# a - l are async functions
tasks = [
    a(ticker),
    b(ticker),
    c(ticker),
    d(ticker),
    # Synchronous
    e(ticker),
    f(ticker),
    # Synchronous
    g(ticker),
    h(ticker),
    # Synchronous
    i(ticker),
    j(ticker),
    k(ticker),
    l(ticker),
]
results = asyncio.gather(*tasks)

(a_response, b_response, c_response, d_response, e_response, f_response,
 g_response, h_response, i_response, j_response, k_response, l_response) = await results


Hi @vishal.nanwani ,


I don't think you can have get_data_async() and an async-wrapped get_data() together under asyncio.gather. That is actually what you had initially, and I was getting an error in asyncio.gather for the get_data one.
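One stdlib-level workaround (an assumption on my side, not something from the RD docs): a blocking call such as get_data() can be pushed onto a worker thread with `asyncio.to_thread`, which yields a real awaitable that can sit alongside genuine coroutines in `asyncio.gather`. Sketch with a hypothetical `blocking_get_data` stand-in:

```python
import asyncio

def blocking_get_data(ticker):
    # Hypothetical stand-in for a synchronous rd.get_data call.
    return f"data for {ticker}"

async def async_fetch(ticker):
    # Stand-in for a coroutine-style call such as get_data_async.
    await asyncio.sleep(0.01)
    return f"async data for {ticker}"

async def main():
    # to_thread runs the blocking function in a thread pool, so it can be
    # gathered together with real coroutines without blocking the loop.
    return await asyncio.gather(
        asyncio.to_thread(blocking_get_data, 'AAPL.O'),
        async_fetch('VOD.L'),
    )

sync_result, async_result = asyncio.run(main())
```

Note that the blocking call still counts against any server-side rate limits; this only fixes the event-loop side of the problem.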

Hi @aramyan.h ,


Ok noted. Will try your method out.

Thanks!


Hi @aramyan.h ,


I have successfully converted all my data retrieval functions to asynchronous functions. Now I am faced with a few errors that I have caught:

[Error(code=429, message="too many requests for /data/datagrid/beta1/ [POST] Requested universes: ['T.N']. Requested fields: ['TR.PriceOpen', 'TR.PriceHigh', 'TR.PriceLow']")]
[Error(code=429, message="too many requests for /data/datagrid/beta1/ [POST] Requested universes: ['T.N']. Requested fields: ['TR.Price52WeekLowDate', 'TR.Price52WeekHighDate']")]
[Error(code=429, message="too many requests for /data/datagrid/beta1/ [POST] Requested universes: ['T.N']. Requested fields: ['TR.PricePctChgYTD', 'TR.RelPricePctChgYTD', 'TR.TotalReturnYTD', 'TR.PricePctChg1D', 'TR.PriceNetChg1D']")]
[Error(code=429, message="too many requests for /data/datagrid/beta1/ [POST] Requested universes: ['T.N']. Requested fields: ['TR.Price50DayAverage', 'TR.Price200DayAverage']")]

This is happening in the function I have written out above. I only send one request at a time; I don't send multiple requests for the same data points. Is there any way to prevent this error? I'm not quite sure how to troubleshoot this particular problem.


Hi @vishal.nanwani ,


You are perhaps hitting the 10,000-request daily limit; you can check the limits here.


Hi @aramyan.h ,

(In response to your comment about too many requests) To solve this, would you recommend using:

await asyncio.sleep(1)

after each request, so that the code would look like:


    a = await base_stock_statistics(ticker, data_points)
    await asyncio.sleep(1)

    print("Before Retrieving Day High/Low Pricing")
    # 3 TR. data points
    b = fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.PriceOpen', 'TR.PriceHigh', 'TR.PriceLow']).get_data_async(closure='b')
    await asyncio.sleep(1)

    print("Before Retrieving __")
    # 2 TR. data points
    c = fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.Price52WeekLowDate', 'TR.Price52WeekHighDate']).get_data_async(closure='c')
    await asyncio.sleep(1)

    print("Before XXX Information")
    # 5 TR. data points
    d = fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.PricePctChgYTD']).get_data_async(closure='d')
    await asyncio.sleep(1)

    print("Before XYZ Information")
    e = fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.F.ShrOsTot']).get_data_async(closure='e')
    await asyncio.sleep(1)

    print("Before ABC Information")
    f = fundamental_and_reference.Definition(universe=f'{ticker}', fields=['TR.Price50DayAverage', 'TR.Price200DayAverage']).get_data_async(closure='f')
    await asyncio.sleep(1)

    # Await tasks concurrently
    tasks = asyncio.gather(a, b, c, d, e, f)

What do you think of this approach?


Thanks!


Well, if you are hitting the max requests per second, that will help; but if you are hitting the daily 10K limit, it won't.
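If it is the per-second limit, a simple client-side pacer that spaces out request starts is another option besides fixed sleeps after every call. A generic sketch (the `fetch` function and the `per_second` value are illustrative assumptions, not RD-library specifics):

```python
import asyncio

async def fetch(ticker):
    # Hypothetical stand-in for a data request.
    await asyncio.sleep(0.01)
    return f"data for {ticker}"

async def paced_gather(tickers, per_second=4):
    tasks = []
    for ticker in tickers:
        tasks.append(asyncio.ensure_future(fetch(ticker)))
        # Space out request starts so at most per_second begin each second.
        await asyncio.sleep(1 / per_second)
    return await asyncio.gather(*tasks)

results = asyncio.run(paced_gather(['AAPL.O', 'VOD.L'], per_second=50))
```

Unlike sleeping after each awaited response, this keeps the requests overlapping while still capping how quickly new ones are issued.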
