
Requesting index constituents' prices in Python

Hey there,

I am using the Datastream API in Python and trying to retrieve historical end-of-month prices for all constituents of the Euro Stoxx 50 index over a selected period. The problem is that the constituents have changed over the years, so my initial thought was to first build a dictionary mapping each month in the period to its constituents, and then make a second Datastream request based on the constituents listed for each month. Unfortunately, I have already reached my data limit for this month because I ran my code several times, and it seems the code is quite inefficient, retrieving the names and prices for 50 constituents for every single month.

My code is the following:

import pandas as pd
import PyDSWS as dsws
from datetime import date

# Connect to Datastream (credentials redacted)
ds = dsws.Datastream(username="xxxx", password="yyyy")

start_date = date(2018, 1, 31)
end_date = date(2023, 1, 31)
# Month-end dates formatted as 'YYYY-MM-DD' strings
date_range = pd.date_range(start=start_date, end=end_date, freq='M').strftime('%Y-%m-%d')

# One constituents request per month-end
data_dict = {}
for dt in date_range:  # renamed from `date` to avoid shadowing datetime.date
    data = ds.get_data('LDJES50I0118', fields=['ISIN'], date=dt)
    data_dict[dt] = data

data_dict_list = {k: v.values.tolist() for k, v in data_dict.items()}

# One price request per month-end, for that month's constituents
prices_dict = {}
for dt, securities in data_dict_list.items():
    prices = ds.get_data(securities, fields=['P'], start=dt, end=dt)
    prices_dict[dt] = prices

df = pd.concat(prices_dict.values(), axis=0)

I know there have been similar questions on this platform, but I couldn't find a case that takes the changing constituents into account. Does anyone have an idea how to make this code more efficient, so that I don't hit my data limit after a couple of runs? Maybe by storing the requested results, or something similar?
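One common way to implement the "storing the requested results" idea is to cache each month's constituent list on disk, so reruns of the script only spend request quota on months not fetched before. The sketch below is an illustration of that caching pattern, not tested against a live Datastream account: `fetch` is a hypothetical callable standing in for the actual `ds.get_data` request, and the JSON file name is an assumption.

```python
import json
from pathlib import Path

CACHE_FILE = Path("constituents_cache.json")

def get_constituents_cached(fetch, dates):
    """Return {date_string: [ISIN, ...]}, calling `fetch(date_string)`
    only for dates not already stored in the local JSON cache.

    `fetch` is a callable wrapping the real API request, e.g.
        lambda d: ds.get_data('LDJES50I0118', fields=['ISIN'], date=d)
    (hypothetical wrapper -- adapt it to your own request code and
    convert the returned DataFrame to a plain list before caching).
    """
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    for d in dates:
        if d not in cache:
            # Only uncached months cost request quota
            cache[d] = fetch(d)
            # Persist immediately so a crash mid-run loses nothing
            CACHE_FILE.write_text(json.dumps(cache))
    return cache
```

On a second run the file already holds every month, so the function returns without making any API calls at all.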

Thank you in advance for your help!


1 Answer


Hi @roman.lengeling,
I reached out to a specialist in-house to make sure, but to the best of my knowledge your method is the only one that would work. I would advise using the `|L` nomenclature, as shown here, to get constituents, though; have you tried that? You could also get the lists in DFO and bring them into Python afterward, as shown here, which in effect gives you a higher request limit.


Hi @jonathan.legrand,

Thank you for your help. I haven't tried either of your recommendations yet; I will try them as soon as my account is unlocked again :)