I have a Python application that runs for 30 minutes. Within the program, every 30 seconds, I am snapping values for a set of ~40 RICs. So, every 30 seconds, I do this:
```python
rdp_session = rdp.open_platform_session(
    RT_APP_KEY,
    rdp.GrantPassword(username=RT_RDP_LOGIN, password=RT_RDP_PASSWORD)
)
snap = rdp.get_snapshot(universe=RIC_UNIVERSE, fields=['BID', 'ASK'])
rdp_session.close()
```
While it generally works, sometimes it throws random authorization errors. This is what the error looks like on the RDP side:
```
2021-10-13 06:02:50,409 ERROR [ajp-apr-8280-exec-226] OpenAMRestProxy authenticateModule [GE-6180] - class org.springframework.web.client.ResourceAccessException
org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://localhost/openam/json/authenticate": Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out
	at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:666)
2021-10-13 06:02:50,927 ERROR [ajp-apr-8280-exec-226] OpenAMRestProxy authenticateModule [GE-6180] - HttpClientErrorException: org.springframework.web.client.HttpClientErrorException: 400 Bad Request
2021-10-13 06:02:51,446 ERROR [ajp-apr-8280-exec-226] AuthApiController handleOpenAmRestException [GE-6180] - OpenAmRestException code: 401, message: null, errorCode: 107, errorMessage: Authentication Failed.
com.tr.aaa.as.exception.OpenAmRestException
2021-10-13 06:02:51,446 INFO [ajp-apr-8280-exec-226] HistoricalErrorLogger logHistoricalError [GE-6180] - GrantType::null, Id::GE-A-01103867-3-6180, SessionId::null, AppId::f25d914da4b849d2be537c891b103c3b83724f26, RefreshToken::null, DeviceId::null, Exception::(name=class com.tr.aaa.as.exception.OpenAmRestException, message=null), HTTP Status::401(Unauthorized), errorCode::107, errorMessage::Authentication Failed.
```
I am told that I should be doing a refresh instead of the process above, and that I'm "flooding" the system (one call every 30 seconds? Really?). In any event, can someone provide a code snippet that would reflect the preferred best practice? Keeping in mind that the program needs to be 100% stable, and that random authorization errors are not even slightly acceptable.
Would it be possible to know what threshold I'm violating that's causing an issue? And would it be possible for the authorization server to correctly state the actual issue, rather than just serving back a generic authorization failure or bad request message? When a credential works, then 30 seconds later, it (apparently randomly) doesn't work, saying it's an "authorization failure" is not helpful.
I am not a Python expert, but one thing I noticed with the RDP Library is that if the wait loop/sleep is not properly implemented, the library may not function correctly in terms of callbacks and authentication. I was previously using sleep() but was advised by the RDP Lib team to use something like the following:
```python
while time.time() < exit_time:
    # The following line ensures the async event callback mechanism works
    asyncio.get_event_loop().run_until_complete(asyncio.sleep(1))
```
Also, can you confirm which version of the RDP library you are using? The latest version is:

```shell
pip install refinitiv-dataplatform==1.0.0a10
```

and several issues have been fixed over the past few months.
The code snippet you provided above opens a session, performs a snapshot, then closes the session. Can you confirm that this is what you are doing every 30 seconds? If so, what I would suggest is this:
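To make that concrete, here is a minimal sketch. The `rdp` calls and credential names are lifted from your snippet; the `run_snapshots` helper and the 30-second/30-minute figures are placeholders based on your description, so adjust them to taste:

```python
import time

SNAP_INTERVAL_S = 30      # seconds between snaps
RUN_DURATION_S = 30 * 60  # total run of 30 minutes

def run_snapshots(snap_fn, duration_s=RUN_DURATION_S, interval_s=SNAP_INTERVAL_S):
    """Call snap_fn once per interval until the duration elapses; collect results."""
    snaps = []
    exit_time = time.time() + duration_s
    while time.time() < exit_time:
        snaps.append(snap_fn())
        time.sleep(interval_s)
    return snaps

def main():
    # Same rdp calls as in your snippet -- only their placement changes:
    # the session is opened once at startup and closed once at shutdown.
    # RT_APP_KEY, RT_RDP_LOGIN, RT_RDP_PASSWORD, RIC_UNIVERSE as in your code.
    import refinitiv.dataplatform as rdp
    session = rdp.open_platform_session(
        RT_APP_KEY,
        rdp.GrantPassword(username=RT_RDP_LOGIN, password=RT_RDP_PASSWORD),
    )
    try:
        run_snapshots(lambda: rdp.get_snapshot(universe=RIC_UNIVERSE,
                                               fields=['BID', 'ASK']))
    finally:
        session.close()  # close ONCE, on the way out
```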
The difference here is that you don't need to open and close the session every 30 seconds - you just need to do that once during the life of the application.
Alternatively, you don't really need to do this at all. That is, instead of performing a snapshot every 30 seconds, you can set up your request as streaming. Then, for the life of the application, you capture streaming updates within a callback, which ensures the data you have is always fresh and up-to-date.
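For example, something along these lines. This is a sketch assuming the `StreamingPrices` interface of the library; the exact callback signatures can differ between versions, so treat it as illustrative rather than definitive:

```python
import asyncio
import time

latest = {}  # most recent fields per RIC, kept fresh by the callback

def on_fields(streaming_prices, instrument_name, fields):
    # Invoked by the library on the initial image and on every update
    latest.setdefault(instrument_name, {}).update(fields)

def main():
    # RT_APP_KEY, RT_RDP_LOGIN, RT_RDP_PASSWORD, RIC_UNIVERSE as in your code.
    import refinitiv.dataplatform as rdp
    session = rdp.open_platform_session(
        RT_APP_KEY,
        rdp.GrantPassword(username=RT_RDP_LOGIN, password=RT_RDP_PASSWORD),
    )
    streams = rdp.StreamingPrices(
        universe=RIC_UNIVERSE,
        fields=['BID', 'ASK'],
        on_refresh=on_fields,  # initial image
        on_update=on_fields,   # subsequent ticks
    )
    streams.open()
    exit_time = time.time() + 30 * 60
    while time.time() < exit_time:
        # Keep the event loop serviced so callbacks fire (see the note above);
        # read 'latest' whenever you need the current values.
        asyncio.get_event_loop().run_until_complete(asyncio.sleep(1))
    streams.close()
    session.close()
```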
Thanks for the response. To confirm: every 30 seconds, I open the session, snap, then close (then I add the rows to my collection data frame, then wait for the next snap).
I had initially set this up as a streaming application, but had authentication issues there as well: after some period of successfully capturing streaming events, authentication would spontaneously fail. I switched to this approach because I'm fine with a snap every 30 seconds (it's possible that, for the instruments I'm looking at, there may not even be an update during the 30-minute window), but most importantly, I wanted to avoid, at all costs, these random authentication errors. I thought I'd be safe by opening and closing, and simply having separate sessions, but apparently not.
My biggest issue is that I don't know why the authentication server is failing. To me, it looks utterly random. I can't set up a test case that reliably triggers the error because sometimes it runs with no problem. And the error message is of no help. I'm apparently violating some policy, but no one will tell me what it is.
If I move the open and close session calls outside of my loop (one open and close for the 30-minute window), I feel like I'm risking a random error if/when it tries to reauthenticate during that period...which is what happened with the streaming connection.
What would be the minimal amount of change possible to make the current approach work (separate snaps, not streaming), not "flood" the system with my 60 snaps during the 30 minute window, AND guarantee that the authentication server won't take a sudden unexpected dislike to me?