How To Connect to News With Python

Aleniles
Aleniles Explorer
edited March 27 in Refinitiv Data Platform

Hello,

I am trying to connect to RDP news directly in Python, without having the terminal running on the same machine.

import lseg.data as ld
import os

os.environ["LD_LIB_CONFIG_PATH"] = r"C:\\Users\\padadmin\\PycharmProjects\\pythonProject\\Repos\\pwm-pad\\APIs\\Configuration"

ld.open_session()

I have the JSON configuration file set up like this:

{
  "sessions": {
    "platform": {
      "ldpv2auth": {
        "app-key": "MYAPIK",
        "client_id": "GESG1-xxxxx",
        "client_secret": "psw"
      }
    }
  }
}

A couple of questions:

a) Is that JSON format correct? If not, how can I fix it so that I can access the news WITHOUT having the terminal running in the background?
b) How can I recover the password (client_secret) if I no longer remember it?

Thanks

Best Answer

  • Jirapongse
    Jirapongse ✭✭✭✭✭
    Answer ✓

    @Aleniles

    According to the available scopes, you don't have permission to access news. Please contact your LSEG Representative or Account Manager to check your permission.

    Regarding the Filings, the code looks like this:

    import os
    os.environ["LD_LIB_CONFIG_PATH"] = "../../Configuration"

    import lseg.data as ld
    from lseg.data.content import filings

    ld.open_session("platform.ldpv2")

    query = """{
      FinancialFiling(
        filter: {AND: [
          {FilingDocument: {DocumentSummary: {FeedName: {EQ: "Edgar"}}}},
          {FilingDocument: {DocumentSummary: {FormType: {EQ: "10-Q"}}}},
          {FilingDocument: {DocumentSummary: {FilingDate: {BETWN: {FROM: "2020-01-01T00:00:00Z", TO: "2020-12-31T00:00:00Z"}}}}}
        ]},
        sort: {FilingDocument: {DocumentSummary: {FilingDate: DESC}}},
        limit: 10
      ) {
        _metadata { totalCount }
        FilingDocument {
          Identifiers { Dcn }
          DocId
          FinancialFilingId
          DocumentSummary { DocumentTitle FeedName FormType HighLevelCategory MidLevelCategory FilingDate SecAccessionNumber SizeInBytes }
          FilesMetaData { FileName MimeType }
        }
      }
    }"""

    definition = filings.search.Definition(query=query)
    response = definition.get_data()
    print(response.data.df)

    # Download a file
    response.data.files[0].download(path="download")
    

    For more information regarding the Filings API, please refer to the Filings API documentation on the Developer Portal.

Answers

  • Hello @Aleniles

    Yes, the format is correct. The format for the Data Library for Python (https://developers.lseg.com/en/api-catalog/lseg-data-platform/lseg-data-library-for-python) Platform Session configuration is as follows:

    {
      "logs": {
        "level": "debug",
        "transports": {
          "console": {
            "enabled": false
          },
          "file": {
            "enabled": false,
            "name": "lseg-data-lib.log"
          }
        }
      },
      "sessions": {
        "default": "ldpv2",
        "platform": {
          "ldpv1": {
            "app-key": "App-Key",
            "username": "V1 Machine-ID",
            "password": "Password"
          },
          "ldpv2": {
            "client_id": "V2 Client-ID",
            "client_secret": "Client Secret",
            "app-key": ""
          }
        }
      }
    }
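
    With the ldpv1 and ldpv2 entries defined under "platform", the session to open can be selected by name, as the accepted answer does. A minimal sketch, assuming the configuration folder path used in the accepted answer (adjust it to your own location):

    import os
    # Point this at the folder containing the configuration file;
    # it is set before importing lseg.data, matching the accepted answer's pattern.
    os.environ["LD_LIB_CONFIG_PATH"] = "../../Configuration"

    import lseg.data as ld

    # Open the named platform session from the configuration above,
    # e.g. "platform.ldpv1" or "platform.ldpv2".
    ld.open_session("platform.ldpv1")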


    You can then find more resources about getting News via the Data Library on the Developer Portal.

    Regarding the password, please contact your LSEG Representative or Account Manager, who can help you reset it.

  • Aleniles
    Aleniles Explorer

    Great, so in my case it is V1.

    I managed to get the <lseg.data.session.Definition object at 0x214fa2d8710 {name='ldpv1'}> message, although this doesn't necessarily mean that I am connected successfully, right? How can I check that I am connected successfully?

    When I run this:

    from datetime import timedelta

    ld.news.get_headlines("LSEG.L", start="20.08.2024", end=timedelta(days=-4), count=3)

    I am getting this:

    No user scope for key=/data/news/v1/headlines, method=GET.

    ScopeError: Insufficient scope for key=/data/news/v1/headlines, method=GET.
    Required scopes: {'trapi.data.news.read'}
    Available scopes: {'trapi.data.symbology.advanced.read', 'trapi.search.explore.read', 'trapi.cfs.claimcheck.read', 'trapi.data.filings.metadata', 'trapi.auth.cloud-credentials', 'trapi.metadata.read', 'trapi.search.metadata.read', 'trapi.data.filings.retrieval', 'trapi.graphql.subscriber.access', 'trapi.data.filings.search', 'trapi.data.symbology.read'}
    Missing scopes: {'trapi.data.news.read'}

    Which methods should I use then, based on the available scopes?



  • Aleniles
    Aleniles Explorer

    Basically, it seems that these are in scope:

    Available scopes: {'trapi.data.symbology.advanced.read', 'trapi.search.explore.read', 'trapi.cfs.claimcheck.read', 'trapi.data.filings.metadata', 'trapi.auth.cloud-credentials', 'trapi.metadata.read', 'trapi.search.metadata.read', 'trapi.data.filings.retrieval', 'trapi.graphql.subscriber.access', 'trapi.data.filings.search', 'trapi.data.symbology.read'}

    Maybe it works with Filings? I haven't seen any example here: https://github.com/LSEG-API-Samples/Example.DataLibrary.Python/blob/lseg-data-examples/Examples/1-Access/EX-1.01.05-News.ipynb

    Are you able to provide a snippet that I could use for Filings that is in my scope ?

  • Hello @Aleniles

    The <lseg.data.session.Definition object at 0x214fa2d8710 {name='ldpv1'}> message indicates that you have opened the session successfully.

    About the "Missing scopes: {'trapi.data.news.read'}" message, it is a permission issue. Please follow my colleague @Jirapongse suggestion below.

  • Aleniles
    Aleniles Explorer
    edited April 4

    I am trying to filter by specific Organization IDs.

    I tried adding:

    filter: {
      AND: [
        {FilingDocument: {DocumentSummary: {FeedName: {EQ: "Edgar"}}}},
        {FilingDocument: {DocumentSummary: {FormType: {EQ: "10-K"}}}},
        {FilingDocument: {DocumentSummary: {FilingDate: {BETWN: {FROM: "2023-12-01T00:00:00Z", TO: "2025-03-31T00:00:00Z"}}}}},
        {FilingDocument: {Identifiers: {OrganizationId: {IN: ["4295914405", "4295905573", "4295904620"]}}}}
      ]
    },

    And it seems to filter correctly.

    However, I am not able to download all documents for all the org IDs; it works only for one (NVDA). Could you maybe provide a snippet that works?

  • Jirapongse
    Jirapongse ✭✭✭✭✭

    @Aleniles

    response.data.files is an array. You can use an index to access each file.

    To download all files, you can try this one.

    response.data.files.download(path="download")
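
    For completeness, here is a minimal sketch that puts the pieces of this thread together: the 10-K query with the OrganizationId filter discussed above, followed by an index-based loop over the returned files. The main assumption is that the files collection supports len(); everything else is taken from the snippets earlier in the thread:

    import os
    os.environ["LD_LIB_CONFIG_PATH"] = "../../Configuration"

    import lseg.data as ld
    from lseg.data.content import filings

    ld.open_session("platform.ldpv1")

    # 10-K filings for the three organizations discussed above.
    query = """{
      FinancialFiling(
        filter: {AND: [
          {FilingDocument: {DocumentSummary: {FeedName: {EQ: "Edgar"}}}},
          {FilingDocument: {DocumentSummary: {FormType: {EQ: "10-K"}}}},
          {FilingDocument: {DocumentSummary: {FilingDate: {BETWN: {FROM: "2023-12-01T00:00:00Z", TO: "2025-03-31T00:00:00Z"}}}}},
          {FilingDocument: {Identifiers: {OrganizationId: {IN: ["4295914405", "4295905573", "4295904620"]}}}}
        ]},
        sort: {FilingDocument: {DocumentSummary: {FilingDate: DESC}}},
        limit: 10
      ) {
        _metadata { totalCount }
        FilingDocument {
          Identifiers { Dcn }
          DocId
          FinancialFilingId
          DocumentSummary { DocumentTitle FormType FilingDate }
          FilesMetaData { FileName MimeType }
        }
      }
    }"""

    response = filings.search.Definition(query=query).get_data()
    print(response.data.df)

    # Download every returned document by index, as suggested above
    # (len() support on the files collection is assumed).
    for i in range(len(response.data.files)):
        response.data.files[i].download(path="download")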