Ask any investor how they select mutual funds (MF) – the most common answer is star ratings. Such is the allure of stars that it doesn’t seem to matter whose stars they are. I know, because I spent my career in fund research. And no matter how different my firm’s or any other firm’s research methodology was, our wealth management clients always wanted the final recommendation in the form of star ratings – like Morningstar’s.
In this post, I don’t want to look at the merits of Morningstar’s star ratings. I simply want to remind investors and advisers that they need to look beyond star ratings, indeed any ratings. Here are some reasons why –
MF star ratings are not a classification system
Nowadays, star ratings are ubiquitous as a way for customers and critics to give their opinions on a product or service – movies, restaurants and so on. This is very different from when hotels were given stars based on actual differences in features/benefits – Tourist hotels were one star, Standard ones were two, Comfort ones were three, First Class ones were four, and Luxury hotels were five. Star ratings were a classification system.
Mutual fund star ratings are not such a classification system. A five-star rated fund doesn’t have more bells and whistles than a one-star rated fund.
MF star ratings don’t reflect asset class views
Most rating systems first create categories based on asset classes and geographies, and then classify individual mutual funds into the categories. The ratings reflect relative views of funds in that category. This makes sense. The ratings do not reflect the rating house’s view on the asset class – in fact, some rating houses don’t have asset class views at all.
So a five-star rated technology fund in 1999 was not a recommendation to buy technology funds – it just meant that if you had decided to buy technology funds anyway, this was a good one. However, most retail investors don’t understand this.
MFs may be misclassified
The asset class issue sometimes gets more complicated because asset classes are arbitrary definitions. For example, the equity asset class tends to get sub-divided into large cap, mid/small cap and diversified. The intention is to differentiate between funds that invest in blue-chip stocks and those that invest in smaller stocks, since size reflects risk. Diversified equity funds can invest across the size spectrum. But what do we do with funds that claim to be large cap yet invest 20-25% in small to mid cap stocks?
If such a fund gets classified as large cap, alongside funds that stick to large cap stocks alone, it will appear to outperform whenever small/mid cap stocks are doing well. That is an unfair advantage, and a possible case of mislabelling.
Rating houses have different views on asset class classifications, let alone on methodology. Hence, the same fund can be classified differently by different firms. Worse, the classifications can change over time.
MF star ratings reflect past performance
Most MF star ratings, notably Morningstar’s, reflect the past performance of the fund. Now, there is a reason why regulators around the world insist on a disclaimer on any mutual fund communication saying ‘past performance is not a good guide to future performance.’ It is because a number of studies have shown this to be the case. Indeed, Morningstar itself now encourages investors and advisers to use star ratings only as a starting point and augment that with its analyst ratings.
So the obvious question is – why rely on star ratings based on past performance if you’ve decided not to rely on past performance?
MF star ratings are like brokers’ buy recommendations; the narrative is important
This is the most important reason why I was never in favour of publishing star ratings in the public domain. When you publish a star rating, or any other symbol, the reader tends to look only at that and not read the accompanying commentary, assuming one is even available. However, just like brokers’ reports that say ‘buy’ and bury their reservations in the narrative, mutual fund researchers tend to be more balanced in the body of their reports.
My firm used to make our ratings and accompanying reports available only to wealth managers, so they bore the responsibility of explaining the ratings to retail investors. Our shortest report was one page: a summary of our views, with none of the quantitative data that no one understands.
I commend the Indian regulator’s directive banning the use of ratings in mutual fund advertisements. But star ratings are still freely available on rating houses’ and media sites, without accompanying reports or explanations. The situation will probably get worse when mutual funds become available on ecommerce sites, as the regulator wants them to be.
I recommend mandating that no ratings are available without at least a summary report with a narrative that an ordinary retail investor can understand.
Mutual funds are a great way to invest for retail investors who don’t have the time, inclination or expertise to pick their own stocks and bonds. The world over, they tend to get ‘distributed’ via financial advisers or wealth managers who claim expertise in researching and selecting funds. However, given the dominance of star ratings, especially Morningstar’s, in driving fund flows, it appears the wealth managers end up relying on the same ratings – with not-so-good results.
Setting aside the merits of Morningstar’s or any other rating house’s system, my concern is that the symbol doesn’t capture the rating house’s own research and views. I understand that symbols cater to investors’ short attention spans, but it doesn’t have to be that way. I believe a one-page summary can address most of the concerns I have raised here.
In the meantime, investors, please don’t rely on star ratings to invest in mutual funds – for your own sake.