
Search API: how to structure the credit rating data like in the GOVSRCH interface?

If I add a credit rating filter to my query in GOVSRCH, I get a nicely formatted table with a nested column containing e.g. Moody's Long-term Issuer Rating. See screenshot.

If I run the same query with the API, I get several columns that look like they are related to the credit rating filter, but contain only the raw data. See second screenshot.

I want to arrange this data into the same nicely formatted nested-column structure that the GOVSRCH interface produces. How can I do this?




#technology #search #credit-ratings
govsrch.jpeg (234.7 KiB)
codebook.jpeg (205.4 KiB)
Accepted

Hi @tr105 ,


I believe we need to further transform the output we get from the API with pandas tooling; I suspect that is what GOVSRCH does under the hood.

I wrote some quick code that does this, but it should be carefully reviewed and tested before use, since it is outside the scope of API support and may not be the perfect solution. In any case, I believe restructuring the dataframe is the way to go, as the API will not return the data in the shape GOVSRCH displays it. Here is my solution:


import refinitiv.data as rd
import pandas as pd

rd.open_session()

# Request a small sample of instruments together with their nested ratings data
ratings = rd.discovery.search(
    view=rd.discovery.Views.GOV_CORP_INSTRUMENTS,
    top=10,
    select='DocumentTitle, RIC, RatingsScope'
)

# One row per (instrument, rating agency) pair, then expand each rating dict into flat columns
ratings = ratings.explode('RatingsScope')
ratings = pd.concat(
    [ratings.drop(['RatingsScope'], axis=1), ratings['RatingsScope'].apply(pd.Series)],
    axis=1
)

agencies = ratings['Agency'].unique()

new_df = pd.DataFrame()

for agency in agencies:
    # Identifier columns stay flat; the rating fields are grouped under the agency code
    columns = pd.MultiIndex.from_tuples(
        [("DocumentTitle", ""), ("RIC", ""), ("Agency", "")]
        + [(agency, col) for col in ratings.columns[3:]]
    )
    # Copy each flat column into the corresponding MultiIndex column by position
    df_transformed = pd.DataFrame(ratings, columns=columns)
    for i in range(len(columns)):
        df_transformed.loc[:, columns[i]] = ratings.iloc[:, i]
    new_df = pd.concat([new_df, df_transformed], axis=1)
    # Keep only one copy of the shared identifier columns
    new_df = new_df.loc[:, ~new_df.columns.duplicated()]

new_df = new_df.drop('Agency', axis=1, level=0)

new_df.head()


screenshot-2023-06-22-at-185515.png

Please note that in my output the MultiIndex column name for, e.g., 'Moody's Long-term Issuer Rating' is MDY. You may create a dict that maps the agency codes to display names and read from it in the loop; I didn't do that because I don't know the naming convention. If you are not aware of it either, you may raise a content query via MyRefinitiv.
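
For illustration, here is a minimal sketch of such a mapping that plugs into the loop above. The agency codes come from the API output, but the display names are assumptions on my side (only MDY -> Moody's is confirmed by your screenshot) and should be verified via a content query:

# Hypothetical mapping from API agency codes to GOVSRCH-style display names;
# only the MDY entry is confirmed by the screenshots, the Fitch name is a placeholder.
agency_names = {
    "MDY": "Moody's Long-term Issuer Rating",
    "FTC": "Fitch Long-term Issuer Rating",  # assumed, not confirmed
}

# Inside the for-agency loop, build the MultiIndex from the mapped display name,
# falling back to the raw agency code when it is not in the dict:
display_name = agency_names.get(agency, agency)
columns = pd.MultiIndex.from_tuples(
    [("DocumentTitle", ""), ("RIC", ""), ("Agency", "")]
    + [(display_name, col) for col in ratings.columns[3:]]
)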


Again, this may not be the best way to transform the original dataframe, but I think it is how you get an output similar to the one in GOVSRCH.


Hope this helps.


Best regards,

Haykaz




Hi @tr105,


Thank you for your question. I am not aware of a direct way to solve the nesting challenge from the API; however, I can offer a workaround using the pandas explode function.

Since I don't have your complete code, I will demonstrate with a simpler example that you can extend to your use case:

import refinitiv.data as rd
import pandas as pd

rd.open_session()

df = rd.discovery.search(
    view=rd.discovery.Views.GOV_CORP_INSTRUMENTS,
    top=10,
    select='DocumentTitle, RIC, RatingsScope'
)
df

screenshot-2023-06-20-at-115552.png

Below, I transform the df with the explode function and apply pd.Series to the RatingsScope components:

# One row per rating entry, then expand each RatingsScope dict into its own columns
df = df.explode('RatingsScope')
df = pd.concat([df.drop(['RatingsScope'], axis=1), df['RatingsScope'].apply(pd.Series)], axis=1)
df

screenshot-2023-06-20-at-115659.png
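
As an aside, pd.json_normalize is a common alternative to .apply(pd.Series) for the flattening step. A minimal sketch of the same transformation, starting again from the raw search result df and assuming every exploded row contains a dict (bonds without a RatingsScope entry would need extra handling):

# Equivalent flattening using json_normalize instead of .apply(pd.Series)
exploded = df.explode('RatingsScope')  # here df is the raw result of the search call above
flat = pd.json_normalize(exploded['RatingsScope'].tolist())
flat.index = exploded.index  # reuse the per-bond index so the concat aligns row by row
df_alt = pd.concat([exploded.drop(columns=['RatingsScope']), flat], axis=1)
df_alt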

I am not sure if this is exactly what you were looking for, but I am hopeful it will help to take this further.


Best regards,

Haykaz


Hi @haykaz.aramyan, I couldn't add my response as a reply to your message due to a UI bug, so I have added it below instead.

Hi Haykaz, thanks for your clear answer.

It does not answer my question, though. Maybe I wasn't clear.

The point is that in your example the data is still not structured in the way that GOVSRCH does it (see screenshot below).

In the API response, the RatingsScope field contains the array of objects shown below for each bond. Note that there can be more entries in this array if you add more RatingsScope filters to the query. Under the hood, GOVSRCH restructures this array into the columns that you can see in the screenshot below.

Note that the RatingsScope data returned by the API does not contain any string like 'Moody's Long-term Issuer Rating'. GOVSRCH seems to conjure those from thin air. We want to replicate that.

Note that we are NOT asking how to create nested columns in Pandas dataframes; we don't care about that.

        "RatingsScope": [
          {
            "Agency": "MDY",
            "CurrentRating": "Baa1",
            "CurrentRatingDate": "2018-08-09T00:00:00.000Z",
            "CurrentRatingRank": 8,
            "Preliminary": "n",
            "RatingScope": "F",
            "RatingType": "MIS",
            "Solicited": "y"
          },
          {
            "Agency": "FTC",
            "CurrentRating": "WD",
            "CurrentRatingDate": "2018-01-11T00:00:00.000Z",
            "CurrentRatingRank": 98,
            "Preliminary": "n",
            "RatingScope": "F",
            "RatingType": "FDL",
            "Solicited": "y"
          }
        ],



screenshot-2023-06-22-at-144118.jpeg



