Financial models have always been hampered by scarce (or highly noisy) data, by necessary mathematical simplifications such as normality or linearity assumptions, and by a limited ability to use a wide set of features to better describe the problem and gain predictive power. This is true in risk management, trading and portfolio construction, but even more so in liquidity models.
Liquidity is a multi-dimensional beast that economists, quants and statisticians have tried to understand for several decades. The problem has very high dimensionality, highly non-linear patterns and very sparse data.
The nature of the problem lends itself to machine learning techniques, and over the years we have therefore tested several of them in the calibration of liquidity models.
In this research, leveraging GPUs and the cloud, we focused on the estimation of market liquidity, in particular transaction cost. We tested random forests and neural networks for the estimation of tradable volumes, showing a significant increase in out-of-sample performance. We are now extending the experiment from a single component to the entire transaction cost, testing deep learning and in particular deep reinforcement learning.
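A random forest regressor of the kind mentioned above can be sketched as follows. This is an illustrative toy only: the feature names, the synthetic volume model and the log-space fit are assumptions for the example, not the speaker's actual setup.

```python
# Sketch: random-forest estimation of tradable volume on synthetic data.
# Features (spread, volatility, market cap, past volume) are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
spread = rng.lognormal(-4, 0.5, n)     # bid-ask spread
vol = rng.lognormal(-2, 0.4, n)        # return volatility
mcap = rng.lognormal(9, 1.0, n)        # market capitalization
past_vol = rng.lognormal(12, 1.2, n)   # trailing traded volume
X = np.column_stack([spread, vol, mcap, past_vol])

# Non-linear synthetic target: volume shrinks with spread, grows with past volume
y = past_vol * np.exp(-5 * spread) * (1 + 0.3 * vol) * mcap**0.1
y *= rng.lognormal(0, 0.2, n)          # multiplicative noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, np.log(y_tr))          # volumes are heavy-tailed: fit in log space
r2 = r2_score(np.log(y_te), model.predict(X_te))
print(f"out-of-sample R^2 (log volume): {r2:.2f}")
```

The out-of-sample R^2 is the kind of metric on which the talk reports improvement; here it only measures fit on the synthetic data above.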
In applying these more advanced and complex techniques, we are paying particular attention to the ongoing research on interpretability (explainable AI, or XAI), which is a necessary condition, not yet completely resolved, for the extensive use of deep learning in finance.
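One simple, model-agnostic XAI technique of the kind this research area studies is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The data and feature names below are synthetic assumptions for illustration.

```python
# Sketch: permutation importance as a basic interpretability check.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))          # three hypothetical features
y = 2.0 * X[:, 0] + np.sin(X[:, 1])    # third feature is pure noise
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Larger score drop under shuffling => more important feature
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, imp in zip(["feat0", "feat1", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

On this toy data the noise feature should receive near-zero importance, while the two informative features rank highest.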
Liquidity Risk Management → http://bit.ly/2WQBYtk
Next ’19 ML & AI Sessions here → https://bit.ly/Next19MLandAI
Next ‘19 All Sessions playlist → https://bit.ly/Next19AllSessions
Subscribe to the GCP Channel → https://bit.ly/GCloudPlatform
Speaker(s): Stefano Pasquali
Session ID: MLAI232
Publisher: Google Cloud