How to aggregate Amount Outstanding based on Issue Rating for all US Municipal Bonds using python?

Hello Team, a client needs a Python script that aggregates the Amount Outstanding by Issue Rating for all US Municipal Bonds.


We have the script below, but it is limited to 10,000 rows, while the full result set contains 992,151 bonds.


import refinitiv.data as rd

rd.open_session()

rd.discovery.search(
    view=rd.discovery.Views.MUNICIPAL_INSTRUMENTS,
    top=10000,
    filter="(IsActive eq true and SearchAllCategoryv2 eq 'US Municipal Bonds')",
    select="LongName,Ticker,CouponRate,MaturityDate,CUSIP,RIC,MuniStateDescription,CouponClassDescription,EJVAssetID,BusinessEntity,PI,SearchAllCategoryv3,SearchAllCategoryv2,SearchAllCategory,IssueDate,BankQualified,BondInsuranceDescription,IsCallable,IsPutable,IsSinkable,FaceOutstanding"
)


After getting all the results, she would like to aggregate the Amount Outstanding per Issue Rating. She is looking to achieve:

1) US Muni bonds with Moody's rating Aaa = X amount outstanding

US Muni bonds with Moody's rating Aa = Y amount outstanding

US Muni bonds with Moody's rating Baa = Z amount outstanding, and so on for every Moody's rating


2) All US Muni Bonds = DDD amount outstanding (regardless of whether the bond carries a Moody's rating or no rating at all)
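Once the search results are in a DataFrame, both aggregations reduce to a pandas groupby over the FaceOutstanding column. A minimal sketch, using a toy DataFrame in place of the real results; the "MoodysRating" column name is an assumption (check the view's metadata for the actual rating field):

```python
import pandas as pd

# Toy rows standing in for the search results; "MoodysRating" is a
# hypothetical field name -- look up the real one in the view metadata.
df = pd.DataFrame({
    "CUSIP": ["000001AA1", "000002BB2", "000003CC3", "000004DD4"],
    "MoodysRating": ["Aaa", "Aa", "Aaa", None],   # None = unrated bond
    "FaceOutstanding": [1_000_000, 500_000, 250_000, 100_000],
})

# 1) Amount outstanding per Moody's rating (unrated bonds excluded)
per_rating = df.groupby("MoodysRating", dropna=True)["FaceOutstanding"].sum()

# 2) Grand total across all bonds, rated or not
total = df["FaceOutstanding"].sum()

print(per_rating)
print(total)
```

With the sample rows above, Aaa sums to 1,250,000 and the grand total, including the unrated bond, is 1,850,000.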


Best Answer

  • Jirapongse
    Jirapongse ✭✭✭✭✭
    Answer ✓

    You can use the Search metadata to list all available fields in the MUNICIPAL_INSTRUMENTS view.

    from refinitiv.data.content import search

    response = search.metadata.Definition(
        view = search.Views.MUNICIPAL_INSTRUMENTS  # Required parameter
    ).get_data()

    response.data.df
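Once that metadata DataFrame is loaded, the rating-related properties can be located with a simple substring filter on the field names. The layout below is illustrative only (inspect response.data.df for the real shape, where the property names may sit in the index rather than a column), and the field names shown are made up for the example:

```python
import pandas as pd

# Toy stand-in for the metadata DataFrame returned by
# search.metadata.Definition(...).get_data().data.df
meta_df = pd.DataFrame({
    "Property": ["CouponRate", "MoodysRating", "RatingsScope", "FaceOutstanding"],
    "Type": ["Double", "String", "Nested", "Double"],
})

# Keep only fields whose name mentions "Rating", case-insensitively
rating_fields = meta_df[meta_df["Property"].str.contains("Rating", case=False)]
print(rating_fields["Property"].tolist())
```

The same str.contains filter applied to the real metadata should surface every searchable rating field in the MUNICIPAL_INSTRUMENTS view.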

Answers

  • Jirapongse
    Jirapongse ✭✭✭✭✭

    @dianne.palmario

    Thank you for reaching out to us.

    The Search API limits the results to 10,000 entries, so it cannot return all 992,151 rows in a single request.

    I am not sure whether CFS on RDP can provide all US Municipal Bonds. Please check with the product support team.

  • Thank you for your response. I understand that there is a limit of 10,000 entries per pull. However, my question is specifically about adjusting the Python code to split 992,151 entries into 100 pulls, each containing 10,000 entries.

    For example:

    1. Is it possible to modify the function where "top = 10000" is set, so that it covers a range, such as 10,001 to 20,000 for one pull, 20,001 to 30,000 for the next, and so on?

    2. Alternatively, is there a way to adjust the "filter" function to pull based on a specific range of row numbers?

    3. Or, could a loop be implemented to achieve this result?
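    A common workaround along the lines of option 3 (a sketch, not official guidance) is to slice the universe into sub-10,000-row chunks by adding an extra filter clause, for example one request per maturity year, and then concatenate the pages. The date-range clauses below follow the OData-style syntax the Search filter uses; whether every yearly bucket really stays under the cap is an assumption to verify first:

    ```python
    # One filter string per maturity year, each meant to stay under the
    # API's 10,000-row cap (verify no single year exceeds it).
    BASE = "(IsActive eq true and SearchAllCategoryv2 eq 'US Municipal Bonds')"

    def yearly_filters(base_filter, start_year, end_year):
        for year in range(start_year, end_year + 1):
            yield (f"{base_filter} and MaturityDate ge {year}-01-01 "
                   f"and MaturityDate lt {year + 1}-01-01")

    filters = list(yearly_filters(BASE, 2025, 2027))
    print(len(filters))  # 3

    # Each filter is then passed to rd.discovery.search(..., filter=f)
    # inside a loop, and the returned DataFrames combined with pd.concat.
    ```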


  • Jirapongse
    Jirapongse ✭✭✭✭✭

    @iatamanchuk

    You can use Navigators, which categorize and summarize the results. You can then use the values returned within the buckets in your filters.

    For more information, please refer to the Building Search into your Application Workflow article.
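    As a concrete sketch of that workflow: a navigator request returns bucketed counts, and each bucket label can be turned into a follow-up filter. The payload shape below is only an assumption modeled on the raw "Navigators" section of a Search response (inspect a real response yourself); the bucket-to-filter step is the part that matters:

    ```python
    # Illustrative navigator payload: results bucketed by state, with counts.
    sample_navigators = {
        "MuniStateDescription": {
            "Buckets": [
                {"Label": "Texas", "Count": 9500},
                {"Label": "California", "Count": 12000},
            ]
        }
    }

    def bucket_filters(navigators, field):
        """Build one search filter per bucket so each follow-up request
        stays under the 10,000-row cap (split any bucket whose Count
        exceeds 10,000 further, e.g. by MaturityDate)."""
        return [
            f"{field} eq '{b['Label']}'"
            for b in navigators[field]["Buckets"]
        ]

    print(bucket_filters(sample_navigators, "MuniStateDescription"))
    ```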

  • Hi @dianne.palmario, I would advise using the LDL following this tutorial.

  • Thanks! I was able to filter by MaturityDate and iterate through the data in a loop. Now I’m working on adding more variables to the query, including the available credit ratings. Where can I find the field names for credit ratings in the MUNICIPAL_INSTRUMENTS view? I attempted using CodeCreator, but the field names that work for individual U.S. municipal bonds don’t seem to work when I use them in my query:

    df = rd.discovery.search(
        view=rd.discovery.Views.MUNICIPAL_INSTRUMENTS,
        top=10000,
        filter=base_filter + date_filter,
        select="LongName,Ticker,CouponRate,MaturityDate,CUSIP,RIC,MuniStateDescription,CouponClassDescription,EJVAssetID,BusinessEntity,PI,SearchAllCategoryv3,SearchAllCategoryv2,SearchAllCategory,IssueDate,BankQualified,BondInsuranceDescription,IsCallable,IsPutable,IsSinkable,FaceOutstanding,RATING"
    )