Is it possible to generate multiple copies of rfa.log, one per process, rather than one per machine?
If you refer to RFA_ConfigGuide.pdf for the fileLoggerFilename parameter, it mentions the ability to include values such as the process ID in the filename.
e.g. \Logger\AppLogger\fileLoggerFilename = "rfa{p}.log"
will generate a log file named something like rfa1234.log, where 1234 is the process ID.
I just tested this briefly here with RFA C++ 7.4 on Windows: running two concurrent instances of the same application produced two separate rfaxxx.log files in the current working directory.
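For reference, a minimal sketch of the relevant logger settings in a flat RFA configuration file could look like this (fileLoggerEnabled is included from memory of the standard logger parameters, so verify the exact names and node path against RFA_ConfigGuide.pdf for your RFA version):

\Logger\AppLogger\fileLoggerEnabled = true
\Logger\AppLogger\fileLoggerFilename = "rfa{p}.log"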
Best Answer
It's been a while since I've used the RFA Market Data Interfaces (the name really is a misnomer, isn't it?), but the first thing is to make sure the request throttling that RFA itself applies is under control so that it doesn't interfere with your results. The configuration variables to look into are the throttle* parameters. Have you considered those?
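As a rough sketch only, those settings sit alongside the connection configuration, something like the lines below. throttleMaxCount is the parameter mentioned further down; throttleEnabled and the exact node path are assumptions recalled from RFA_ConfigGuide.pdf, so please check them against the guide for your RFA version.

\Connections\<yourConnectionName>\throttleEnabled = true
\Connections\<yourConnectionName>\throttleMaxCount = 100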
Secondly, I know the ADS can be tuned to give more attention to serving new requests as opposed to serving existing subscriptions with updates. The default ADS settings favour servicing existing subscriptions, which makes a lot of sense.
Lastly, your request performance will always depend more than anything else on whether the ADS already has the item in its cache or whether it has to go to the ultimate source to get it. You can really only test request performance if you warm up and lock in all your items in the ADS's cache before you begin the test.
If it is just a matter of avoiding timeouts, then simply tune the timeout parameters in RFA, along with things like OpenWindow (or is it OpenLimit?) on the ADS and throttleMaxCount in RFA. If, on the other hand, you really want to increase request throughput (which of course will also give you fewer timeouts), then you have to do a more thorough analysis.
But coming back to your question: is there something that can be done on the event queue model in RFA? Sure, you can make your application multi-threaded and fire your requests on multiple connections concurrently, along the lines of the sketch below. But I bet your immediate problem is elsewhere, so this may not be worth the trouble.
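To illustrate just the threading pattern, here is a minimal C++ sketch. dispatchConnection is a hypothetical placeholder for a per-connection event-queue dispatch loop, not an RFA API call, and the actual RFA session, connection and request-registration code is omitted.

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical stand-in for a per-connection dispatch loop. In a real RFA
// application this loop would dispatch that connection's event queue until
// shutdown is requested.
void dispatchConnection(int connectionId, std::atomic<bool>& running)
{
    while (running.load())
    {
        // The RFA dispatch call would go here; this only simulates work.
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::cout << "connection " << connectionId << " dispatched" << std::endl;
    }
}

int main()
{
    std::atomic<bool> running(true);

    // One dispatch thread per connection, so requests and responses on
    // different connections are processed concurrently.
    std::vector<std::thread> dispatchers;
    for (int i = 0; i < 2; ++i)
        dispatchers.emplace_back(dispatchConnection, i, std::ref(running));

    // ... register item interests / fire requests on each connection here ...

    std::this_thread::sleep_for(std::chrono::seconds(1));
    running = false;
    for (std::thread& t : dispatchers)
        t.join();

    return 0;
}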
If you upgrade your application to be an RSSL consumer, then you'll see increased request performance just from that move. RSSL also has features like batch requests, which SSL doesn't have.
I hope this gave you some pointers.