
How to find a web URL for news using Eikon story id or URN

Hello,

I am trying to find a way to get the web URL for a news item from the URN or story ID available inside Eikon.
For example, there was a news item dated 18 May 2021 with the headline

"Euro zone inadvertently supported zombie firms, ECB finds"

The above headline has the story ID/URN "urn:newsml:reuters.com:20210518:nlXXXXXXXX:3", and the same news item is available on the web at
https://www.reuters.com/article/us-ecb-policy-zombies-idUSKCN2CZ0QP

In this case, how can I get the relevant URL through the Eikon API, so that the same news item can be read on the web? Please assist.

Tags: eikon | eikon-data-api | workspace | workspace-data-api | refinitiv-dataplatform-eikon | news | search

Accepted answer

@tushar.shetty1 As far as I am aware, there is no systematic link between our storyIds and reuters.com links. Also, please be aware that not all Reuters News is available on the free-to-use public site, so I'm not sure how that would work in any case.


@vippsworld News story items usually contain some links to the news article's source. We can use the BeautifulSoup package to extract any href links from the news story and store them as a column in our original headlines frame. Check the following:

# Assumes the Eikon API has already been configured with a valid app key,
# e.g. ek.set_app_key('YOUR_APP_KEY')
import eikon as ek
from bs4 import BeautifulSoup

df = ek.get_news_headlines('zombie companies', count=10)
df

Now that we have our headlines frame, we can request each story and extract any links it contains:

df['Links'] = ""
for idx, story in enumerate(df['storyId']):
    # Fetch the full story HTML and parse it
    soup = BeautifulSoup(ek.get_news_story(story), 'html.parser')
    # Collect every href found in the story body
    links = [a['href'] for a in soup.find_all('a', href=True)]
    # Use .at to avoid pandas chained-assignment warnings
    df.at[df.index[idx], 'Links'] = links

df

It's important to note that not all stories will have source references, and other links may be present, but you can do more complex link filtering.
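As one sketch of such filtering, the helper below keeps only links whose host is reuters.com or a subdomain of it. This is a hypothetical illustration, not part of the Eikon API; the function name and domain check are my own assumptions.

```python
# Hypothetical helper: keep only hrefs pointing at reuters.com.
# Not part of the Eikon API -- purely illustrative link filtering.
from urllib.parse import urlparse

def filter_reuters_links(links):
    """Return only the links whose host is reuters.com or a subdomain."""
    kept = []
    for href in links:
        host = urlparse(href).netloc.lower()
        if host == "reuters.com" or host.endswith(".reuters.com"):
            kept.append(href)
    return kept
```

You could then apply it to the frame built above with `df['Links'] = df['Links'].apply(filter_reuters_links)`.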

I hope this helps.


@jason.ramchandani: Thanks for that elaborate solution. However, @vippsworld is mostly interested in the Reuters stories, for which there seem to be no links available. So is there a way to use what we have (news code / PNAC) to get to the Reuters URL for them?
