Article Type: Research Paper
Date of acceptance: August 2024
Date of publication: September 2024
DOI: 10.5772/acrt.20230095
Copyright: ©2024 The Author(s), Licensee IntechOpen, License: CC BY 4.0
The future of portfolio management is evolving from relying on human expertise to incorporating artificial intelligence techniques. Traditional techniques such as fundamental and technical analysis will eventually be replaced by more sophisticated deep reinforcement learning (DRL) algorithms. However, designing a consistently profitable strategy for the complex and dynamic stock market is still a long way off. While previous studies have focused on the American stock market, this paper applies two DRL algorithms, proximal policy optimization (PPO) and advantage actor–critic (A2C), to trade the constituent stocks of the Australian Securities Exchange 50 (ASX50) Index. This paper also incorporates a weighted moving average into the action space and introduces a transaction threshold to help the agents avoid trivial trades that lead to high transaction costs. The results are presented and benchmarked against the ASX50 Index. The A2C agent was better at following trends and had higher upside potential but could suffer more severe losses during bearish markets. The PPO agent, on the other hand, had the lowest annual volatility and the smallest maximum drawdown, which is more helpful in a bearish or volatile market.
deep reinforcement learning
portfolio optimization
proximal policy optimization
advantage actor–critic
Portfolio management is a time-honored topic that has been widely discussed and studied in the modern financial world. Traditionally, portfolio management has relied on expert human knowledge of financial markets, asset valuation, and risk management, and was primarily the domain of wealthy individuals and institutions.
However, the advent of online brokerages and social media has greatly lowered the barrier to entry into the world of investing. Online brokerages offer low commission costs and benefits to new investors. Additionally, the sharing of trading strategies and successes on social media has brought a great deal of attention to the stock market. On top of that, the perfect storm of pandemic-induced boredom and disposable income from reduced spending drove the average person's interest to an all-time high. Brokerages reported record numbers of new retail investors and record trading volumes [1], with assets worth billions of dollars traded daily.
Even though the stock market is now more accessible to the average person, it remains complex, and deciding which stocks to invest in remains an intimidating question. The common techniques to help investors with their decisions include fundamental analysis, which measures the intrinsic value of a stock using macro- and microeconomic factors; technical analysis, which evaluates a stock using statistical trends related to price and volume; and algorithmic trading [2] with machine learning techniques.
With remarkable progress in the artificial intelligence field, it is no surprise that about 75% of the trading volume on the United States stock market is generated through algorithmic trading [2]. The main challenges of algorithmic trading are extracting information from noisy stock data and designing a suitable trading strategy [3]. Many studies have attempted to forecast the stock market by leveraging sophisticated deep learning methods for their strong feature extraction ability [4–6]. However, such models lack decision-making capabilities [7]: they focus on stock price forecasting, and the trading strategies are then developed based on pre-defined rules and instructions. Stock prices are influenced by many unpredictable factors, from economic policies to natural disasters [8], which lowers the effectiveness of such static trading strategies. Further, Zulfiquar
Most classical reinforcement learning (RL) algorithms do not consider the exogenous nature and noise of financial time series data, which may lead to treacherous trading decisions. To address this issue, Yue
Therefore, deep reinforcement learning, a combination of deep learning and reinforcement learning, has emerged as a promising approach to portfolio management. The adoption of deep learning techniques helps RL algorithms scale to large and noisy data. In recent years, there has been a growth in studies applying DRL to portfolio management [10–12]. Nevertheless, these studies are not without limitations.
First, the American stock market is heavily studied, especially the constituent stocks in the Dow Jones Industrial Average. This has limited the scope of the studies. Second, historical stock data between 2000 and 2020 was commonly used to train and test the model. The stock market has undergone major transformations over the past decade and is expected to continue evolving. The DRL models have a strong tendency to overfit [12]; thus, using data that are too old may be detrimental to the model.
It is therefore more interesting to study the application of DRL to a non-American stock market over the past five years. This paper studied the daily stock data of the constituent stocks of the Australian Securities Exchange 50 (ASX50) Index from 2017 to 2022 and applied two DRL algorithms, namely advantage actor–critic (A2C) and proximal policy optimization (PPO), to manage the portfolio. This paper introduces limitations on the agent's action space through a weighted moving average and a transaction threshold to mitigate risks and enforce specific strategies that align with prior knowledge. However, this approach restricts the agent's autonomy and its ability to explore novel strategies organically. Thus, this paper explores how such interventions influence the adaptability of the DRL algorithms.
The remainder of this paper is organized as follows. Section 2 reviews the related works of this study. Section 3 defines the portfolio management problem. Section 4 describes the methodology of the study and the necessary backgrounds. Section 5 presents the results of the models, which are validated through analysis. Finally, Section 6 concludes the paper and discusses interesting leads as future works.
Studies related to stock trading often involve large sums of money, and private firms are secretive about their results [12]. As such, state-of-the-art models and promising results are not publicly available. Nevertheless, there is still accessible research on the topic of portfolio management. This section discusses the conventional approaches to portfolio management and recent works on the DRL approach to the problem.
The volatility and uncertainty in the stock market introduce risk to any investment in a stock portfolio, and there is no single solution to portfolio management. Modern portfolio theory, or mean-variance analysis [13], is one of the prominent methods used for portfolio management. The model assumes investors are risk-averse and rational, aiming to diversify the portfolio by minimizing the variance for a given level of expected return [14]. The portfolio with the minimal variance is identified: this is called the minimum-variance portfolio. However, the minimum-variance portfolio does not account for market fluctuations and sees the market from a very long-term view. Future asset volatility is difficult to ascertain in practice [15], which results in a considerable amount of estimation error and an unstable portfolio composition.
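To make the estimation issue concrete, the sketch below (not taken from the paper) computes a long-only minimum-variance portfolio from historical daily returns; the sample covariance matrix used here is exactly the quantity whose estimation error the text refers to.

```python
# Minimal sketch (not from the paper): long-only minimum-variance portfolio
# estimated from historical daily returns.
import numpy as np
from scipy.optimize import minimize

def minimum_variance_weights(returns: np.ndarray) -> np.ndarray:
    """returns: (T, N) matrix of daily returns for N assets."""
    cov = np.cov(returns, rowvar=False)              # sample covariance matrix
    n = cov.shape[0]
    objective = lambda w: w @ cov @ w                # portfolio variance w' S w
    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)  # fully invested
    bounds = [(0.0, 1.0)] * n                        # long-only (no short selling)
    result = minimize(objective, np.full(n, 1.0 / n), bounds=bounds,
                      constraints=constraints)
    return result.x

# Example with simulated returns for five assets
rng = np.random.default_rng(0)
print(minimum_variance_weights(rng.normal(0.0005, 0.01, size=(500, 5))).round(3))
```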
Another well-known method for portfolio management is momentum trading. Investors identify the inertia of a price trend to seek profit from the herding behavior of market psychology. Momentum effects can be observed in stock markets across the world and are not limited to any particular market [14]. However, momentum strategies suffer heavily from sudden changes in market volatility. Any changes in price trends will inflict significant losses for investors trading based on the momentum.
Overall, conventional methods rely on static assumptions about the market, such as that the market is efficient or that asset returns and risks can be accurately estimated [16]. These assumptions do not accurately reflect the dynamic nature of the market. Thus, the conventional models lack adaptability and are profitable only in the very long run.
The DRL algorithms, on the other hand, are able to learn directly from raw sensory input without much domain-specific knowledge. The DRL aims to approximate optimal value or policy functions using deep neural networks and feedback signals from the environment. For example, Brini
The core idea of a critic-only or value-based method is to estimate the state–action values based on the Q-value function using a neural network. An investment decision will then be made by selecting the action with the highest predicted Q-value.
Researchers such as Chen
The limitation of the critic-only approach stems from its inherent reliance on discrete state and action space, which limits the actions that the agent can take. Adapting DQN to work with continuous action spaces in the case of portfolio management is challenging due to its reliance on discrete Q-values for each possible action.
On the other hand, an actor-only or policy-based method is where the agent learns the optimal policy directly without the need to compute and compare expected outcomes of actions by generating an optimal probability distribution over the possible actions. An investment decision will then be made by selecting the action with the highest probability.
Wang
The actor-only model is not a popular approach in portfolio management. The limitation of the actor-only approach is the tendency to converge to a local optimum, which results in high variance when evaluating the policy. Although there are techniques to reduce this variance, there is a better approach to portfolio management: the actor–critic approach.
The actor–critic framework is made up of two models: the actor network representing the policy that decides the action to take in a given state and the critic network representing the value function that evaluates the action that was taken. The actor–critic approach is able to combine the strengths of the critic-only and the actor-only model, making it the most popular approach for portfolio management.
Yang
Sadriu [25] applied A2C and DDPG to optimize a stock portfolio trading 28 constituent stocks of the OMX Stockholm Index. The paper evaluated the performance of the two models during the 2020 stock market crash and noticed that the conventional methods actually outperformed the DRL models. Although the DRL models could recover from the crash faster, this highlights the difficulty DRL models have in responding to extreme conditions in the stock market.
Instead of using the usual open, high, low, close price, and volume (OHLCV) and technical indicators of the stocks, Yue
Portfolio management is the process of continuously reallocating capital to create and maintain a diversified portfolio. We modeled the portfolio management process as a Markov decision process (MDP). An MDP is defined as the tuple $(S, A, P, R, \gamma)$, where $S$ is the state space, $A$ is the action space, $P$ is the state transition probability, $R$ is the reward function, and $\gamma$ is the discount factor.
The goal of the trading agent was to find an optimal policy 𝜋∗ that determines the best course of action to take at any given state. Four assumptions were made about the portfolio management process:
The liquidity of the market is high enough that there will be zero slippage. This implies that each trade the trading agent makes will be carried out at the market price.
The capital invested by the trading agent is not significant enough to have any influence on the market.
All transactions will be made at the closing price.
Short selling is not allowed.
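Under these assumptions, the agent's objective can be stated in the usual MDP form (given here generically, not verbatim from the paper): find the policy that maximizes the expected discounted cumulative reward,

$$\pi^{*} = \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} \gamma^{t}\, R(s_t, a_t)\right].$$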
The ASX50 Index consists of the top 50 companies listed on the Australian Securities Exchange by market capitalization [28]. The index is weighted by the market capitalization of its constituent stocks. These stocks have a high trading volume, which suggests higher market liquidity and justifies our first assumption.
We extracted the daily OHLCV data of the stocks between 2017-01-01 and 2023-01-06 from EODData. The stocks selected were from the index composition at the end of the time period. Although there have been changes to the constituent stocks of the index over this period, it is not our aim to mimic the index.
Although there are 50 companies in the ASX50 Index, five companies were listed on the exchange after 2017 and did not have data for the whole period. We removed these stocks from the portfolio, so the final dataset consisted of 45 stocks in total (see Appendix A).
Figure 1 shows the closing price of the ASX50 Index over the past 5 years. The index has fluctuated wildly over this period: strong growth in 2019 was followed by the COVID-19 crash in the first half of 2020. The index recovered steadily after the crash but experienced renewed volatility in 2022.
The dataset was split into three periods as shown in Figure 2. Data from 2017-01-01 to 2019-12-31 were used for training, while data from 2020-01-01 to 2020-12-31 were used for validation. Finally, we tested our agents' performance on data from 2021-01-01 to 2023-01-06.
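As a minimal illustration of this chronological split (the file name and column layout below are hypothetical, not the paper's):

```python
# Hypothetical sketch: chronological train/validation/test split by date.
import pandas as pd

df = pd.read_csv("asx50_ohlcv.csv", parse_dates=["date"])  # hypothetical file

train = df[(df["date"] >= "2017-01-01") & (df["date"] <= "2019-12-31")]
valid = df[(df["date"] >= "2020-01-01") & (df["date"] <= "2020-12-31")]
test  = df[(df["date"] >= "2021-01-01") & (df["date"] <= "2023-01-06")]
```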
Apart from the OHLCV data, we computed the financial technical indicators to help describe the market. Technical indicators reflect the stock market movements from a different perspective and reduce the impact of noise on the data [29]. The technical indicators fit into four categories: trend, momentum, volume, and volatility. Table 1 shows a summary of the technical indicators used. (See Appendix B for the details.)
Category | Financial technical indicators |
---|---|
Trend | 5-day exponential moving average (EMA) |
Trend | 20-day exponential moving average (EMA) |
Momentum | Moving average convergence divergence (MACD) |
Momentum | Relative strength index (RSI) |
Momentum | Stochastic oscillator |
Volume | On-balance volume (OBV) |
Volume | Volume price trend (VPT) |
Volume | Money flow index (MFI) |
Volatility | Bollinger Bands |
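For illustration, a few of the indicators in Table 1 can be computed from the OHLCV data with pandas as sketched below; the 5- and 20-day EMA windows follow the table, while the 14-day RSI period and the 12/26-day MACD pair are common defaults and assumptions here, not values stated in the paper.

```python
# Illustrative computation of some Table 1 indicators (not the paper's code).
import numpy as np
import pandas as pd

def add_indicators(close: pd.Series, volume: pd.Series) -> pd.DataFrame:
    out = pd.DataFrame(index=close.index)
    out["ema_5"] = close.ewm(span=5, adjust=False).mean()      # 5-day EMA
    out["ema_20"] = close.ewm(span=20, adjust=False).mean()    # 20-day EMA

    # MACD: short-term EMA minus long-term EMA (12/26 days assumed)
    out["macd"] = (close.ewm(span=12, adjust=False).mean()
                   - close.ewm(span=26, adjust=False).mean())

    # RSI over a 14-day window (assumed period)
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    out["rsi_14"] = 100 - 100 / (1 + gain / loss)

    # On-balance volume: cumulative volume signed by the price change direction
    out["obv"] = (np.sign(delta).fillna(0) * volume).cumsum()
    return out
```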
Our portfolio management environment was built using OpenAI Gym [30], an open-source Python library for developing RL algorithms. It is important to build an environment that simulates real-world trading as closely as possible so that the optimal policy learned can be applicable in the real world.
The state space of our portfolio management environment at any given time
The action space of our agent was designed to handle continuous allocation of capital across a portfolio of assets, including cash. The action is a vector of portfolio weights whose elements represent the proportion of capital allocated to cash and to each stock. No short selling of the stocks is allowed, so each weight is bounded between 0 and 1. The policy output would consist of a vector of length equal to the number of stocks plus one (for cash), which is mapped to these portfolio weights.
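One common way to turn an unconstrained policy output into weights that respect these constraints is a softmax normalization; the snippet below illustrates that idea and is not necessarily the paper's exact mapping.

```python
# Map a raw policy output of length N + 1 (cash + N stocks) onto long-only
# portfolio weights that lie in [0, 1] and sum to one. Softmax is one common
# choice; the paper's exact normalization scheme is not reproduced here.
import numpy as np

def to_portfolio_weights(raw_action: np.ndarray) -> np.ndarray:
    z = raw_action - raw_action.max()   # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()
```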
In order to minimize the impact of the trades on the stock market, we gave the agent a smaller initial balance of $50,000. This is also a more reasonable amount for retail investors to start with.
Different brokerage platforms charge different amounts of commission fees. Given the small starting capital the agent has, the agent would be incurring the minimum fee for each trade. Hence, a fixed commission fee of $20 per trade was introduced, instead of a linear or quadratic cost model.
The introduction of a transaction fee should deter greedy algorithms that aim at achieving an optimal portfolio at every time step.
The ultimate goal of all investors is to maximize their returns. Therefore, we set the reward function as the logarithmic rate of return, $r_t = \ln(V_t / V_{t-1})$, where $V_t$ denotes the portfolio value at time step $t$.
Saturation and inefficiency in learning can be an issue if the output is too sparse. When the reward function has a large range of values, certain actions may be assigned unnecessarily high importance, which can be disadvantageous in the long run. As such, we applied a reward scale of 0.1. This will compress the space of estimated expected returns and prevent any numerical instability [8].
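Putting the pieces above together, the per-step reward might be computed as in the following sketch: the scaled log return of the portfolio value, with the $20 commission deducted for each executed trade. Variable names are illustrative, not taken from the paper's code.

```python
# Sketch of the reward logic: scaled logarithmic return of portfolio value,
# net of the fixed $20 commission per executed trade.
import numpy as np

COMMISSION_PER_TRADE = 20.0
REWARD_SCALE = 0.1

def step_reward(prev_value: float, cash: float, holdings: np.ndarray,
                prices: np.ndarray, n_trades: int) -> float:
    fees = COMMISSION_PER_TRADE * n_trades
    new_value = cash - fees + float(holdings @ prices)   # mark-to-market, net of fees
    return REWARD_SCALE * np.log(new_value / prev_value)
```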
We applied the PPO and A2C algorithms from the Stable-Baselines3 library, a set of reliable pre-built RL algorithm implementations built on PyTorch [32].
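Training with Stable-Baselines3 then reduces to a few lines, as in the sketch below; "PortfolioEnv" is a hypothetical name for the custom Gym environment described above, and the timestep budget is a placeholder rather than the paper's setting.

```python
# Minimal Stable-Baselines3 training sketch (hyperparameters are placeholders).
from stable_baselines3 import A2C, PPO

train_env = PortfolioEnv(train_data)   # hypothetical custom Gym environment

ppo_model = PPO("MlpPolicy", train_env, verbose=0)
ppo_model.learn(total_timesteps=100_000)

a2c_model = A2C("MlpPolicy", train_env, verbose=0)
a2c_model.learn(total_timesteps=100_000)

# Trained policies can then be rolled out on the validation and test
# environments with model.predict(observation, deterministic=True).
```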
The PPO is an on-policy algorithm that optimizes the policy function while ensuring that the distance between the new and old policies is not too large. This constraint helps to prevent the policy from changing too much in each iteration, which could lead to instability or suboptimal convergence.
The PPO uses a surrogate objective function to approximate the objective function. The surrogate objective function restricts the range within which the policy can change. The clipped ratio is introduced to clip the policy update to a range between 1 − 𝜀 and 1 + 𝜀, where 𝜀 is a small positive number.
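For reference, the clipped surrogate objective being described is the standard one from the PPO paper:

$$L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\!\left[\min\!\left(r_t(\theta)\hat{A}_t,\ \operatorname{clip}\!\left(r_t(\theta),\, 1-\varepsilon,\, 1+\varepsilon\right)\hat{A}_t\right)\right], \qquad r_t(\theta) = \frac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}.$$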
The A2C is an on-policy algorithm and an extension to the baseline actor–critic algorithm. In the baseline actor–critic algorithm, there are two models: an actor network that generates the policy and a critic network that measures how good the chosen action is. However, the log probability in the objective function has very high variance as it involves taking logarithms of small probabilities.
Instead of the action value function, A2C uses an advantage function. The idea of the advantage function is to determine how much better a certain action is compared to the average action. The advantage function provides a baseline that leaves behind only the part that is attributable to the action. This helps to mitigate the high variance in the policy gradients and ensure more stable training.
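Concretely, the advantage of an action is its action value measured relative to the state value, i.e., relative to the average action under the current policy:

$$A(s_t, a_t) = Q(s_t, a_t) - V(s_t).$$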
We looked at five metrics to evaluate the performance of the agents: the compound annual growth rate (CAGR), cumulative return, annual volatility, Sharpe ratio (SR), and maximum drawdown (MDD).
The CAGR measures the annualized return of an investment over time and can be expressed as $\mathrm{CAGR} = (V_T / V_0)^{1/n} - 1$, where $V_0$ and $V_T$ are the initial and final portfolio values and $n$ is the number of years.
Cumulative return is the cumulative sum of the daily returns and can be expressed as $R_{\mathrm{cum}} = \sum_{t=1}^{T} r_t$, where $r_t$ is the daily return at time step $t$.
The SR is a measure of risk-adjusted return, describing how much excess return is received for the volatility incurred. We defined the risk-free rate as the yield of the Australian 10-year government bond in this study. The SR can be expressed as $\mathrm{SR} = (R_p - R_f) / \sigma_p$, where $R_p$ is the annualized portfolio return, $R_f$ is the risk-free rate, and $\sigma_p$ is the annualized volatility of the portfolio.
The MDD is the maximum observed loss of the investment, measured from a peak to the subsequent trough before a new peak is attained: $\mathrm{MDD} = \min_t \left( V_t - \max_{s \le t} V_s \right) / \max_{s \le t} V_s$.
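The sketch below shows how these five metrics can be computed from a series of daily portfolio values; the 3% risk-free rate and 252 trading days per year are placeholder assumptions, not the paper's exact settings.

```python
# Illustrative evaluation metrics from a series of daily portfolio values.
import numpy as np

def evaluate(values: np.ndarray, risk_free_rate: float = 0.03,
             trading_days: int = 252) -> dict:
    daily_returns = values[1:] / values[:-1] - 1.0
    years = len(daily_returns) / trading_days

    cumulative_return = values[-1] / values[0] - 1.0            # total return
    cagr = (values[-1] / values[0]) ** (1.0 / years) - 1.0
    annual_volatility = daily_returns.std(ddof=1) * np.sqrt(trading_days)
    annual_return = daily_returns.mean() * trading_days
    sharpe = (annual_return - risk_free_rate) / annual_volatility

    running_peak = np.maximum.accumulate(values)                # highest value so far
    max_drawdown = ((values - running_peak) / running_peak).min()

    return {"CAGR": cagr, "Cumulative return": cumulative_return,
            "Annual volatility": annual_volatility, "Sharpe ratio": sharpe,
            "Max drawdown": max_drawdown}
```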
As shown in Figure 3, the performance of both the PPO and A2C agents fell short of expectations, with returns plunging steadily over time and eventually collapsing. Further investigation suggests that the underwhelming results were due to the high volume of daily transactions in the first quarter of 2020.
Figures 4 and 5 show the distribution and trend of the number of transactions made per day by the PPO agent, respectively. The PPO agent made around 30.67 transactions per day on average, paying a transaction cost of $613.40 per day. Figures 6 and 7 show the distribution and trend of the number of transactions made per day by the A2C agent, respectively. The A2C agent made around 18.27 transactions per day on average, paying a transaction cost of $365.40 per day.
We noticed that
To tackle the issue of an overly high number of daily transactions, we introduced a transaction threshold
However, the results were still undesirable. The PPO agent continued to spiral out of control, while the A2C agent failed to beat the ASX50 market. We also noticed that the A2C agent (
We investigated the trading history of the A2C agent with the best results (
We then looked into one of the holdings the A2C agent (
Therefore, instead of simply comparing the newly proposed portfolio weights directly with the current weights, we applied a weighted moving average to the proposed weights over a rolling window.
Trades would only occur if the agent decided that the weight of a certain stock needs to change by more than the threshold.
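In code, this filtering step might look like the sketch below: the agent's proposed weights are smoothed with a moving average over a window, and a trade is executed only where the smoothed target differs from the current weight by at least the threshold. The linear weighting scheme is an assumption for illustration; the paper's exact weighting is not reproduced here.

```python
# Sketch of the weighted-moving-average + threshold trade filter (illustrative).
import numpy as np
from collections import deque

class TradeFilter:
    def __init__(self, window: int = 15, threshold: float = 0.05):
        self.threshold = threshold
        self.history = deque(maxlen=window)          # recent proposed weight vectors

    def filter(self, proposed_weights: np.ndarray,
               current_weights: np.ndarray) -> np.ndarray:
        self.history.append(proposed_weights)
        # Linearly weighted moving average: more weight on recent proposals
        coeffs = np.arange(1, len(self.history) + 1, dtype=float)
        smoothed = np.average(np.stack(self.history), axis=0, weights=coeffs)

        # Trade only where the weight change exceeds the threshold
        trade = np.abs(smoothed - current_weights) >= self.threshold
        target = np.where(trade, smoothed, current_weights)
        return target / target.sum()                 # renormalize to sum to one
```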
The results of the PPO agent improved substantially, as shown in Figure 13. The agent was able to beat the index, with a few models achieving decent returns. The PPO agent (
Transaction threshold

Window size | 0.04 | 0.045 | 0.05 | 0.055 | 0.06 |
---|---|---|---|---|---|
5 | −16.76% (25.29%) | −12.23% (27.02%) | 4.56% (24.44%) | 6.45% (22.64%) | −2.79% (17.79%) |
10 | 7.83% (25.77%) | 6.17% (17.09%) | 11.05% (12.14%) | 1.71% (4.10%) | −0.39% (1.49%) |
15 | | 12.06% (9.04%) | 4.56% (3.68%) | 0.85% (1.37%) | 0.61% (0.88%) |
The results of the A2C agent improved substantially, as shown in Figure 14. The agent was able to beat the index, with one model achieving remarkable returns. The A2C agent (
Transaction threshold

Window size | 0.04 | 0.045 | 0.05 | 0.055 | 0.06 |
---|---|---|---|---|---|
5 | −1.41% (25.28%) | 2.27% (24.97%) | −9.71% (21.29%) | 8.71% (18.8%) | 0.0% (0.0%) |
10 | −1.82% (24.85%) | 0.0% (0.0%) | 2.04% (23.75%) | 9.44% (17.1%) | 0.0% (0.0%) |
15 | 7.53% (17.14%) | 0.58% (21.52%) | | 5.46% (13.26%) | 1.95% (11.28%) |
Figure 15 illustrates the cumulative returns of the final PPO and A2C agents against the benchmark, ASX50 Index. The PPO agent was able to achieve a higher CAGR compared to the A2C agent. Both agents reported a positive cumulative return; however, both agents fell short of beating the index.
Table 4 shows the summary statistics of the performance metrics of the agents and the ASX50 Index. Both the PPO (4.83%) and A2C (1.32%) portfolios had a lower cumulative return than the index (8.03%). The PPO portfolio had the lowest annual volatility (9.47%), while the A2C (13.98%) and the index (13.43%) had similar annual volatility. The A2C had the largest maximum drawdown (−24.59%), while the PPO had the smallest (−11.76%). This shows that the PPO portfolio took much lower risk than the A2C portfolio and performed better. However, the SR of the PPO portfolio is only 0.29: the reduced level of risk did not produce sufficient returns.
Portfolio | CAGR (%) | Cumulative return (%) | Annual volatility (%) | SR | MDD (%) |
---|---|---|---|---|---|
PPO | 2.34 | 4.83 | 9.47 | 0.29 | −11.76 |
A2C | 0.65 | 1.32 | 13.98 | 0.12 | −24.59 |
ASX50 | 3.85 | 8.03 | 13.43 | 0.35 | −14.59 |
However, if we examine each year of the testing period separately, we get completely different results. Table 5 shows the summary statistics of the performance metrics of the agents and the ASX50 Index in 2021. This was a bull market, evident from the cumulative return of the ASX50 (10.38%). Both the A2C (21.78%) and PPO (10.41%) portfolios had higher returns than the index. Additionally, both the PPO (4.95%) and A2C (9.38%) recorded lower annual volatility than the index (11.16%). The SR was "very good" for both the PPO (2.06) and A2C (2.18) portfolios [33]. The PPO portfolio (−4.67%) again reported the smallest maximum drawdown.
Portfolio | CAGR (%) | Cumulative return (%) | Annual volatility (%) | Sharpe ratio | Maximum drawdown (%) |
---|---|---|---|---|---|
PPO | 10.41 | 10.41 | 4.95 | 2.06 | −4.67 |
A2C | 21.78 | 21.78 | 9.38 | 2.18 | −6.73 |
ASX50 | 10.38 | 10.38 | 11.16 | 0.95 | −6.75 |
Portfolio | CAGR (%) | Cumulative return (%) | Annual volatility (%) | Sharpe ratio | Maximum drawdown (%) |
---|---|---|---|---|---|
PPO | −4.70 | −4.70 | 12.29 | −0.31 | −11.76 |
A2C | −16.37 | −16.37 | 17.18 | −0.91 | −24.59 |
ASX50 | −2.04 | −2.14 | 15.30 | −0.06 | −14.46 |
Table 6 shows the summary statistics of the performance metrics of the agents and the ASX50 Index in 2022. This was a bear market, evident from the negative cumulative return of the ASX50 (−2.14%). Both the A2C (−16.37%) and PPO (−4.70%) portfolios had lower returns than the index. The PPO portfolio reported the smallest maximum drawdown and the lowest annual volatility, at −11.76% and 12.29%, respectively.
Table 7 shows the top gains and top losses in the PPO portfolio. The PPO agent traded 37 of the 45 available stocks. It made a profit on 21 of them, averaging $392.69 per stock, and suffered a loss on 16 of them, averaging −$341.74 per stock. The agent made 66 transactions (40 buys and 20 sells) over the 2-year period, paying $1320 in transaction costs.
Figure 16 shows the cumulative return of the top loser, JHX, against the ASX50 Index. The agent bought 60 shares of JHX at $45.31 on 2022-02-22, and the position was down 41.96% as of 2023-01-06. The agent seemed to believe the stock would eventually rebound and still held on to it. Figure 17 shows the cumulative return of the top winner, WTC, against the ASX50 Index. The agent bought 85 shares of WTC at $31.01 on 2021-07-05. The stock price surged by 47.60% over the next two months, and the agent sold the position at $45.77 on 2021-09-01. The agent has not traded the stock since.
Table 8 shows the top gains and top losses in the A2C portfolio. The A2C agent traded 17 of the 45 available stocks. It made a profit on eight of them, averaging $826.03 per stock, and suffered a loss on nine of them, averaging −$656.27 per stock. The agent made 19 transactions (17 buys and 2 sells) over the 2-year period, paying $380 in transaction costs. Compared to the PPO agent, the A2C agent kept a smaller portfolio and tended to hold a given stock for a longer period of time.
Figure 18 shows the cumulative return of the top loser, TAH, against the ASX50 Index. The agent bought 652 shares of TAH at $4.90 on 08/06/2022. The TAH share price then crashed, dropping by a massive 82%. This badly hurt the returns of the A2C portfolio and is a major reason for its poor performance in 2022. Figure 19 shows the cumulative return of the top winner, S32, against the ASX50 Index. The agent bought 1090 shares of S32 at $2.72 on 2021-02-23, held the stock for about a year, and then sold 630 shares (58%) at $4.96 (an 82.35% increase) on 2022-04-13. The agent has held the remaining 460 shares since.
The A2C agent has a tendency to buy a stock and hold it for a very long time. In fact, it made only two sells over the 2-year period while building up a portfolio of 17 stocks. One sell was S32, as shown in Figure 19; the other was TLS, as shown in Figure 20. The threshold value determined by the validation process might not have been suitable for the market conditions in the testing period, resulting in critical trades not being executed.
In conclusion, we applied two DRL algorithms with continuous action spaces to tackle the portfolio management problem in the Australian market. In order to reduce the frequency and volume of trading by the agents, we introduced a weighted moving average and a transaction threshold to determine whether a trade proposed by the agents is necessary. The weighted moving average and threshold helped reduce the number of trivial trades made by the agents. The optimal window for the moving average and the optimal threshold were determined to be 15 and 0.04 for PPO and 15 and 0.05 for A2C, respectively.
The trading agents were unable to outperform the ASX50 Index, but they were still able to capture some of the patterns of market movement. The A2C agent was better at following trends and had higher upside potential but could suffer more severe losses during bearish markets. The PPO agent, on the other hand, had the lowest annual volatility and the smallest maximum drawdown, which is more helpful in a bearish or volatile market. The opposite conclusion was drawn in the study by Yang
The DRL still has many limitations when being applied to portfolio management. Some of the limitations include the following:
Data limitations: The DRL requires a large amount of high-quality data for the agents to be trained effectively. However, high-quality financial data are not easily accessible to the average person. The OHLCV data are very noisy and influenced by many unpredictable external variables. This makes it very challenging to develop an agent that can respond to any market condition.
Overfitting: The DRL can be prone to overfitting, which is evident in this study. The agents were able to trade and perform well in the validation period as well as the first half of the testing period. However, when faced with unfamiliar market conditions, they made incorrect investment decisions, resulting in poor performance.
Black-box models: It is very difficult to interpret how the agents arrive at their decisions. In our study, we were able to identify the top losers of the portfolio, but we were unable to say for sure why the agent made such a bad trade; we could only speculate based on retrospective information. This can make it hard to explain the agent's decisions to investors, which is often more important than the results themselves.
Real-world implementation: The simulated stock environment is built upon many assumptions and cannot fully cover the complexity of the stock market in the real world. There are many other hurdles that need to be overcome.
For future work, one limitation is the use of a fixed transaction threshold when deciding which trades are necessary and which are not; it would be interesting to explore a more dynamic way of determining this threshold. Second, many researchers have proposed including market sentiment in the model, which we agree is a crucial factor in stock trading. However, financial natural language processing (NLP) is non-trivial and a separate field of research in its own right, and it will be interesting to see whether breakthroughs in NLP bring new perspectives to DRL in portfolio management. Lastly, explainable artificial intelligence (XAI) frameworks can help us understand and interpret the decisions made by the agents. It will be interesting to explore improving the explainability of DRL agents in portfolio management, which could give us new insights into how we approach trading stocks.
SN | Ticker | Company |
---|---|---|
1 | AIA | Auckland International Airport Ltd |
2 | ALL | Aristocrat Leisure Ltd |
3 | AMC | Amcor Plc |
4 | ANZ | Australia and New Zealand Banking Group Ltd |
5 | APA | APA Group |
6 | ASX | ASX Ltd |
7 | BHP | BHP Group Ltd |
8 | BSL | Bluescope Steel Ltd |
9 | BXB | Brambles Ltd |
10 | CBA | Commonwealth Bank of Australia |
11 | COH | Cochlear Ltd |
12 | COL | Coles Group Ltd |
13 | CSL | CSL Ltd |
14 | DXS | Dexus |
15 | FMG | Fortescue Metals Group Ltd |
16 | FPH | Fisher & Paykel Healthcare Corporation Ltd |
17 | GMG | Goodman Group |
18 | IAG | Insurance Australia Group Ltd |
19 | JHX | James Hardie Industries Plc |
20 | MGR | Mirvac Group |
21 | MQG | Macquarie Group Ltd |
22 | NAB | National Australia Bank Ltd |
23 | NCM | Newcrest Mining Ltd |
24 | NST | Northern Star Resources Ltd |
25 | QBE | QBE Insurance Group Ltd |
26 | REA | REA Group Ltd |
27 | REH | Reece Ltd |
28 | RHC | Ramsay Health Care Ltd |
29 | RIO | RIO Tinto Ltd |
30 | RMD | Resmed Inc |
31 | S32 | SOUTH32 Ltd |
32 | SCG | Scentre Group |
33 | SEK | Seek Ltd |
34 | SGP | Stockland |
35 | SHL | Sonic Healthcare Ltd |
36 | STO | Santos Ltd |
37 | SUN | Suncorp Group Ltd |
38 | TAH | Tabcorp Holdings Ltd |
39 | TCL | Transurban Group |
40 | TLS | Telstra Corporation Ltd |
41 | WBC | Westpac Banking Corporation |
42 | WES | Wesfarmers Ltd |
43 | WOW | Woolworths Group Ltd |
44 | WTC | Wisetech Global Ltd |
45 | XRO | Xero Ltd |
EMA is a trend-following indicator that places greater emphasis on recent data points. Compared to a simple moving average, the EMA is more sensitive to recent price movements, and thus is more reliable and relevant. The EMA can be expressed as follows: $\mathrm{EMA}_t = \alpha P_t + (1 - \alpha)\,\mathrm{EMA}_{t-1}$, where $P_t$ is the closing price at time $t$ and $\alpha = 2/(n + 1)$ is the smoothing factor for an $n$-day EMA.
MACD is a momentum indicator that shows the relationship between two EMAs, calculated by subtracting a long-term EMA from a short-term EMA. MACD can be expressed as follows: $\mathrm{MACD}_t = \mathrm{EMA}_{\text{short}}(P)_t - \mathrm{EMA}_{\text{long}}(P)_t$, commonly computed with 12-day and 26-day EMAs.
The stochastic oscillator is a momentum indicator that compares the closing price to its price range over a specific period of time. The stochastic oscillator can be expressed as follows: $\%K_t = \dfrac{C_t - L_n}{H_n - L_n} \times 100$, where $C_t$ is the most recent closing price and $L_n$ and $H_n$ are the lowest and highest prices over the past $n$ periods.
OBV is a volume indicator that measures the buying and selling pressure of the stock, using trading volume movements to anticipate stock price movements. OBV can be expressed as follows: $\mathrm{OBV}_t = \mathrm{OBV}_{t-1} + V_t$ if the closing price rises, $\mathrm{OBV}_{t-1} - V_t$ if it falls, and $\mathrm{OBV}_{t-1}$ if it is unchanged, where $V_t$ is the trading volume at time $t$.
VPT is a volume indicator that measures the strength of a trend based on the relationship between trading volume movements and stock price movements. VPT can be expressed as follows: $\mathrm{VPT}_t = \mathrm{VPT}_{t-1} + V_t \times \dfrac{C_t - C_{t-1}}{C_{t-1}}$, where $C_t$ is the closing price and $V_t$ is the trading volume at time $t$.
The Bollinger Bands form a volatility indicator defined by two trendlines, each placed two standard deviations away from a simple moving average (one above and one below). The Bollinger Bands can be expressed as follows: $\mathrm{BB}^{\pm}_t = \mathrm{SMA}_n(C)_t \pm 2\,\sigma_n(C)_t$, where $\mathrm{SMA}_n$ is the $n$-day simple moving average and $\sigma_n$ is the $n$-day rolling standard deviation of the closing price.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data will be made available on request.
This research received no external funding.
© The Author(s) 2024. Licensee IntechOpen. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.