I am doing a POC of the Python library with our first client.
The client requirement is straightforward: they subscribe to around 9,000 RICs and are interested in snapshot data only, with 2 fields (i.e., a Dynamic View request for an Image).
I tried 4 different models (Delivery and Content layer functions in sync/async modes) with 1,000 RICs for the POC. All of them take around 2 minutes to process that many responses.
One of the async non-streaming samples is as follows:
import asyncio
import json

import refinitiv.dataplatform as rdp

def create_cache(name, msg):
    print("Response received for", msg['Key']['Name'])

ricList = getItemList()  ## here getting 1000 RICs for the POC
mySession = sh.get_session('deployed')

display_stats("Record processing started")  ## start time for calculation

streams = []
for ric in ricList:
    order_book = rdp.ItemStream(
        name=ric,
        service='CANNED_DATA',
        session=mySession,
        domain="MarketPrice",
        fields=['BID', 'ASK'],
        on_refresh=lambda s, msg: create_cache(s, msg),  ## at the moment just printing
        on_update=lambda s, msg: print("Update Msg", s, json.dumps(msg, indent=2)))
    streams.append(order_book.open_async())

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*streams))

display_stats("Processed " + str(len(ricList)) + " records")  ## end time for processing calc
I am wondering if there is something I can do to improve performance; I am expecting to process ~9,000 RICs in 10 seconds.
Typically, the WebSocket API supports batch requests. With a batch request, you can use one request message to request multiple items, which could improve the performance of the application.
However, from my checking, the RDP Python library may not support batch requests.
I think that to get more performance you can use the WebSocket API directly to consume the data. Examples are available on GitHub.
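To illustrate the idea, here is a minimal sketch of building a batch snapshot request message for the WebSocket API. The RICs, service name, and helper function name are placeholders, not part of your setup: `"Name"` as a list makes it a batch request, `"Streaming": False` asks for a one-off Image (snapshot), and `"View"` limits the response to the two fields.

```python
import json

# Hypothetical helper that builds a WebSocket API batch snapshot request.
def build_batch_snapshot_request(rics, service, fields, msg_id=2):
    return {
        "ID": msg_id,
        "Streaming": False,      # snapshot only: no updates after the Refresh
        "Key": {
            "Name": rics,        # batch: a list of items in one request message
            "Service": service,
        },
        "View": fields,          # Dynamic View: return only these fields
    }

request = build_batch_snapshot_request(
    rics=["EUR=", "JPY=", "GBP="],  # placeholder RICs
    service="CANNED_DATA",
    fields=["BID", "ASK"],
)
print(json.dumps(request))
```

You would send this over an open WebSocket connection (e.g. `ws.send(json.dumps(request))` with the `websocket-client` package, after login). Note that the server still answers with one response per item, so the saving is on the request side and connection handling, not on the number of responses parsed.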