Battle Of The Portfolio Optimization Methods - YouTube

Channel: CloseToAlgoTrading

[5]
Hi guys, my name is Denis and you are watching Close To Algo Trading.
[10]
Today I'm going to talk about portfolio optimization.
[13]
What do you think, does portfolio optimization really help us beat the market?
[20]
Stay with me and maybe we can answer this question.
[25]
Before we start, perhaps we need to know what Portfolio optimization means.
[30]
Portfolio optimization is the process of selecting the proportions of various assets to include in
a portfolio, in such a way as to make the portfolio better than any other according
to some objective, subject to specific constraints.
[42]
For my experiment I will try to maximize the Sharpe ratio.
[47]
Sharpe ratio is the measure of risk-adjusted return of a financial portfolio.
[52]
We will calculate it in the following way: Sharpe = (Portfolio Return) / (Portfolio Std)
[61]
because in our case the risk-free rate doesn't play any role, so it can be set to 0.
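The video doesn't show the calculation itself; as a minimal sketch of the simplified formula above (risk-free rate 0, annualized over an assumed 252 trading days), the function name and example data here are illustrative:

```python
import numpy as np

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio with the risk-free rate assumed to be 0,
    matching the simplified formula: mean return divided by std."""
    returns = np.asarray(returns, dtype=float)
    return np.sqrt(periods_per_year) * returns.mean() / returns.std(ddof=1)

# Example on synthetic daily portfolio returns (125 trading days)
rng = np.random.default_rng(0)
daily = rng.normal(0.0005, 0.01, size=125)
print(round(sharpe_ratio(daily), 3))
```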
[67]
Well, we need to allocate our money to different assets in such a way that our portfolio has the maximum
Sharpe value.
[75]
The good news is that a lot of methods already exist that can solve this task
for us.
[91]
Now that we know what we're going to optimize, let's move on to our participants.
[96]
Today, the following methods will take part in our battle:
[101]
- Classic mean-variance optimization
- Hierarchical Risk Parity (HRP), created by Marcos López de Prado
- The Critical Line Algorithm (CLA), which was specially designed for portfolio optimization
- Efficient frontier with a non-convex optimizer

And we also have two more exotic methods:
- An LSTM model that directly optimizes the Sharpe value
- A trained LSTM model that predicts the future allocation
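The video doesn't show code for the classic entries; as a rough numpy sketch of what mean-variance max-Sharpe optimization does, the unconstrained closed form with risk-free rate 0 is the tangency portfolio, w proportional to the inverse covariance times expected returns. Libraries such as PyPortfolioOpt additionally enforce long-only and sum-to-one constraints; everything named below is illustrative, not the author's implementation:

```python
import numpy as np

def max_sharpe_weights(prices):
    """Unconstrained tangency-portfolio weights w ∝ Σ⁻¹ μ (risk-free
    rate 0). Real optimizers add long-only/sum-to-one constraints
    instead of using this closed form directly."""
    rets = np.diff(np.log(prices), axis=0)      # daily log returns
    mu = rets.mean(axis=0)                      # expected return per asset
    sigma = np.cov(rets, rowvar=False)          # covariance matrix
    raw = np.linalg.solve(sigma, mu)            # Σ⁻¹ μ
    return raw / raw.sum()                      # normalize to sum to 1

# Synthetic 125-day price history for 4 assets
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, (125, 4)), axis=0))
w = max_sharpe_weights(prices)
print(w)
```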
[129]
- The first one I found on the internet: the authors adopt deep learning models to directly optimise the portfolio Sharpe ratio.
[138]
They claim that this method shows the best performance over the testing period,
from 2011 to the end of April 2020, including the financial instabilities of the first quarter
of 2020, compared to the other methods.
[154]
You can find the implementation of this model in risk_model.py or in the authors' GitHub repository.
[161]
Let's call this method LSTM, just because they use a simple LSTM network.
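The core idea of that approach is to train the network against the negative Sharpe ratio of the resulting portfolio. Here is a minimal numpy sketch of such a loss (softmax to map raw outputs to long-only weights, risk-free rate 0); the names are mine and this is not the authors' implementation:

```python
import numpy as np

def softmax(z):
    """Map raw network outputs to positive weights summing to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

def negative_sharpe_loss(logits, asset_returns):
    """Loss an LSTM-style model would minimize: build long-only weights
    from the logits, compute the portfolio return series, and return
    the negative of its Sharpe ratio (risk-free rate 0)."""
    w = softmax(logits)
    port = asset_returns @ w                    # portfolio daily returns
    return -(port.mean() / port.std(ddof=1))

# Example: equal-weight portfolio over synthetic returns
rng = np.random.default_rng(4)
rets = rng.normal(0.0005, 0.01, (125, 4))
print(round(negative_sharpe_loss(np.zeros(4), rets), 4))
```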
[168]
- The second method is different from all the others: I trained a simple network that predicts the allocation, based on the model from the first method.
[180]
First, I want to know which of these methods works best.
[184]
For that I chose to use a 125-day price period for all methods.
[190]
The set of assets was selected randomly and consisted of 10 stocks.
[196]
Then I randomly collected 10 periods of 125 prices between 2008 and 2020 and tried to optimize
the portfolio using the different methods.
[211]
The result was interesting: as you can see, in most periods the best Sharpe values come from
the mean-variance and CLA methods, and the LSTM method also shows very good results, in some cases
outperforming the others.
[226]
The other methods show a very strange result: they are very similar to random allocation.
[233]
For the first test I used only stocks; let's do the same test but reduce our assets to
four ETFs.
[242]
We can see that mean-variance and CLA are the leaders, and LSTM shows mixed results but still
looks pretty good.
[252]
What can we say, looking at these results?
[255]
Well, CLA and mean-variance look very good, but will they perform so well if we use
their allocations for our portfolio in the future?
[266]
Let's check it.
[270]
For the tests, I decided to use backtrader, and I created a very simple long-only strategy.
[277]
I used 125 days of historical prices to calculate the asset allocation, and based on this
allocation I rebalanced the portfolio.
[287]
Rebalancing was done every 22 days, roughly once a month.
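The video runs this loop inside backtrader; as a library-free sketch of the same walk-forward procedure (trailing 125-day lookback, rebalance every 22 days, weights held in between), with a hypothetical `alloc_fn` standing in for any of the optimizers above:

```python
import numpy as np

def backtest(prices, alloc_fn, lookback=125, rebalance_every=22):
    """Walk forward through `prices` (days x assets): every 22 days,
    recompute weights from the trailing 125-day window, then hold
    them; returns the equity curve starting at 1.0."""
    n_days, n_assets = prices.shape
    weights = np.full(n_assets, 1.0 / n_assets)   # start equal-weight
    equity = [1.0]
    for t in range(lookback, n_days - 1):
        if (t - lookback) % rebalance_every == 0:
            weights = alloc_fn(prices[t - lookback:t])
        day_ret = prices[t + 1] / prices[t] - 1.0  # per-asset daily return
        equity.append(equity[-1] * (1.0 + weights @ day_ret))
    return np.array(equity)

# Example: a fixed equal-weight allocator as a baseline
rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, (400, 4)), axis=0))
curve = backtest(prices, lambda p: np.full(p.shape[1], 0.25))
print(curve[-1])
```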
[292]
There is one difference between the tests of the classic models and of the LSTM.
[298]
For each classic model I did two tests: in the first test I used the normal calculation of
asset allocation based on 125 days of prices, and in the second
test I accumulated the historical prices and the calculation was always performed on all the collected
data.
[316]
For LSTM, I didn't use collected data, and I always used a new model for each calculation,
because when I tried to reuse the model, the allocation didn't change.
[329]
Also, we have a model that should predict future allocation.
[335]
For the stocks I collected 1000 training data elements from 2000 to 2006.
[342]
One training element consists of 125 days of prices and the allocation
for the next 22 days.
[350]
Next, a simple network was trained and used for testing.
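The exact preprocessing isn't shown in the video; here is a sketch of how such (125-day input, next-22-day target) pairs could be sliced from a price history. The per-asset return over the horizon is used as a hypothetical stand-in for the "true" allocation target the author derives from the LSTM model:

```python
import numpy as np

def make_training_pairs(prices, window=125, horizon=22):
    """Slice a (days x assets) price array into training elements:
    each input X is a 125-day price window, each target y describes
    the next 22 days (here: the per-asset return over that horizon)."""
    X, y = [], []
    for start in range(len(prices) - window - horizon):
        X.append(prices[start:start + window])
        fut = prices[start + window:start + window + horizon]
        y.append(fut[-1] / fut[0] - 1.0)        # horizon return per asset
    return np.array(X), np.array(y)

# Synthetic history: 300 days of prices for 10 stocks
rng = np.random.default_rng(3)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, (300, 10)), axis=0))
X, y = make_training_pairs(prices)
print(X.shape, y.shape)
```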
[356]
The same was done for the ETF test.
[361]
The test statistics for the 2008-2010 period are shown in the table, and the Sharpe values are
in the graph below.
[369]
The result is interesting: most of the methods show
results not far from random, and some of them perform very poorly.
[377]
However, our trained model outperforms all the other methods.
[383]
Let's check another period that is not so close to our training data.
[387]
For the years 2011-2017 the winner is the HRP method with data collection,
but we can see that all the results are very close to each other and not far from random
allocation.
[401]
I also checked the period from 2018 to 2021: there the LSTM model does not work very well,
and the trained model is only a bit better.
[415]
CLA and mean-variance are the winners.
[419]
In addition, I did a test for the specific period from 2012 to 2016, because if you check the
return graph, you can see that our trained model and also the LSTM model perform very poorly there.
[432]
From 2012 to 2013 our model loses money while SPY is growing.
[440]
It is also visible here.
[443]
Maybe it is because I took the training targets, the "true" allocations, from the LSTM model.
[448]
But in this particular period we can see that random allocation outperforms the other methods.
[454]
The final test was done on ETFs.
[457]
So, the trained model slightly outperforms the other methods.
[472]
Based on these results, we can clearly see that a past allocation doesn't provide a
better allocation for the future, and in some cases it is worse than random.
[483]
The model that tries to predict the future allocation looks much more promising, but to get stable
results we would have to add more data and more features.
[492]
However, at the moment it is quite difficult to rely on any of these methods.
[499]
Well.
[500]
That is all for today. I hope it was interesting, and see ya in the next video.
[505]
Bye.