Posting this query on behalf of a client.
Dear Support Team,
We recently migrated our application from the Eikon .NET APIs to the LSEG Data Library for Python. However, we have observed that the performance of data retrieval operations, particularly when using get_data(), is slower compared to the previous implementation with the Eikon .NET APIs.
Our use case involves retrieving data for up to about 850 instruments with 3 column. While we understand that the Python library may have different performance characteristics, we would like to know:
1. Performance of `get_data()`:
- Is `get_data()` the most optimized function for retrieving real-time and historical data in bulk?
- Are there any best practices or configurations (e.g., batching, filtering) to optimize the performance of `get_data()` for our use case?
For your reference, our current implementation is as follows (simplified):
```
import lseg.data as ld

ld.open_session()

# instruments: list of ~850 RICs; cols: list of three field names
ret = ld.get_data(
    universe=instruments,
    fields=cols,
)
```
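To make the batching question above concrete, here is a minimal sketch of the kind of approach we have been considering: splitting the 850-instrument universe into smaller batches and issuing one `get_data()` call per batch. The `chunked()` helper and the batch size of 100 are our own assumptions, not anything taken from the library's documentation.

```python
def chunked(seq, size):
    """Yield successive slices of `seq` with at most `size` elements each."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

# Hypothetical usage with the LSEG Data Library (names assumed as in our
# snippet above; batch size of 100 is an arbitrary value we would tune):
#
#   import lseg.data as ld
#   import pandas as pd
#
#   ld.open_session()
#   frames = [ld.get_data(universe=batch, fields=cols)
#             for batch in chunked(instruments, 100)]
#   ret = pd.concat(frames, ignore_index=True)

if __name__ == "__main__":
    demo_universe = [f"RIC_{i}" for i in range(850)]
    batches = list(chunked(demo_universe, 100))
    print(len(batches))       # 9 batches: 8 full batches of 100, plus one of 50
    print(len(batches[-1]))   # 50
```

Is this kind of client-side batching recommended, or does `get_data()` already batch large universes internally?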
2. Alternative Functions:
- Does the `lseg.data` library provide any alternative functions that are faster or more efficient than `get_data()` for our use cases?
3. Comparison with `refinitiv.data`:
- We noticed that Refinitiv also provides the `refinitiv.data` library. Is this library faster or more optimized than `lseg.data` for similar use cases?
- Are there specific scenarios where `refinitiv.data` is recommended over `lseg.data`?
4. Documentation or Examples:
- Could you provide any documentation, examples, or recommendations for optimizing data retrieval using `lseg.data` or `refinitiv.data`?
We would appreciate your guidance on these questions to ensure that we are using the most efficient approach for our application.
Thank you for your support.