
'We Have to Earn Our Reputation Back'

Deven Sharma has spent the last two years reforming Standard & Poor's rating, governance and disclosure practices. Here, he tells Forbes India how S&P has coped with the post-crisis backlash

Published: Sep 1, 2009 08:46:10 AM IST
Updated: Sep 1, 2009 12:26:34 PM IST

Do you think that after the subprime crisis, there has been some introspection in the ratings industry as to the ways business is won, competed for and the degree of rigour in the rating process?
The answer is, absolutely. In any business, when you go through situations like the one we have, you have to reflect on what you need to do differently. We stepped back, we thought about it and two or three principles sort of guided our direction. One was that we were going to set our own standards to guide the change in the business, in the way we interact with the business, the way we did our analytics, the way we ran our processes. We need to bring more confidence back into our rating. We know that.

Second one was that we were going to centre ourselves all around the investors. Because, ultimately, the success of the ratings comes because investors feel we have a credible benchmark that they can use in their investment decisions. So we have completely centred on making sure that the investors see a value-add in the analytics we provide and the benchmark that we provide in their risk assessment.

India-born Deven Sharma took over as the president of the world's oldest credit rating agency, Standard & Poor's, in 2007, just when the subprime mortgage crisis was ballooning into the worst global recession in living memory
Image: Gautam Singh for Forbes India
And the third principle we have adopted is the principle of transparency: that we must be transparent not only in our analytics but also about how we do things and how we operate. And the fourth is accountability. It is on these four principles that we have based a number of changes.

We have made changes to our analytics; we have made changes to our governance, adding a number of checks and balances; we have made changes to the amount of information we disclose on our analysis and our assumptions. And lastly, we have made changes to our education programme, both for investors and for our analysts.

And that has led to changes in the processes, roles and responsibilities that people play. For instance, we have always had a separation of analytics from commercial activities. We have in fact tightened that up even more.

If you compare our business today to what it was a year ago, or two years ago, it is very different in the way it operates. A year from now, it will in fact be even more different, driven by the changes we have made, by some of the changes that the new regulations have required of us and some of the changes the market has come to expect of us.

Prior to August 2007, when you took over at S&P, what was wrong with the ratings industry?
You have to look at this at multiple levels. One level is, first, what we could have done differently. Clearly, our assumptions around housing-related securities, whether it is the housing prices or the correlations that we assumed, did not pan out. So, there was a disappointment. The question we asked was whether we could have projected such severe declines. And our conclusion was that it would have been very difficult to have projected such severe declines. But what we could have done was make the assumptions of the stress tests we were doing a lot more transparent. That, we could have done. So, the market could have responded through a debate to tell us whether the assumptions were severe enough or not.

Secondly, the market has come to expect a certain stability in higher rated securities, and we have now incorporated stability as a critical factor in our ratings analytics.

What exactly do you mean by stability?
What we mean is there are some structures and structured securities that are inherently more stable than other securities. And historically, we have only focussed on the default levels. But now we are saying if there are two instruments with the same default levels but with different stabilities in their structures, their ratings would be different too.

At the macro level, clearly there are questions about how liquidity dried up so quickly and drove the crisis. There were many factors: over-leverage, liquidity, mark-to-market accounting… But how severe the crisis turned out to be, and the pace and scale it took on, is something we all have to reflect upon.

Do you think the rating agencies failed to take into account the impact of liquidity and how it can suddenly evaporate?
If you ask whether we could have anticipated that liquidity would suddenly evaporate and a major event would bring everything to a halt, clearly, we did not anticipate that. If we had, we would be in a different place.

Do you think the investment banks held rating agencies to ransom to some extent? They were pitting one against the other in terms of the fees they pay and you had to compete to get business.
It is understandable how people could make that statement, but it is not true. The question you should ask is what happened in terms of assumptions. And there were not too many people in 2005 who could have predicted a housing price decline of 30-40 percent, a national housing recession in the US, and housing recessions in Spain and many other parts of Europe. It is a little bit difficult to have predicted that.

What are the changes you have made to the analytical process of ratings?
First, we have started by clarifying what a rating is and what it is not – the fact that it is forward-looking, it is relative, and that in addition to default levels, it focuses on priority of payment, stability and recovery levels.

Secondly, we have also committed to making our ratings comparable across asset classes, across geographies and across time. So when you have a Triple A, there should be confidence that the Triple A means the same thing whether it is structured [paper] or corporate [paper]; whether it is India or China. We have now made that commitment and we are recalibrating our criteria to be able to do that.


Will this recalibration lead to vastly different results in rating?
Sure. It will. For each rating category, we have defined a specific stress level that any asset class must meet. For example, a Triple A must withstand an extreme stress level. In this case, we have taken that extreme stress level to be the Great Depression. So, unless the economic environment starts to move towards a Great Depression-like scenario, all Triple A paper, whether it is corporate, structured, sovereign or municipal, must hold up to that stress.

And on the other hand, a Single B will have much milder rating criteria. So, by applying these common economic stress scenarios, we are trying to be comparable.

The rating outcomes will change as we recalibrate the criteria. For example, we have recently recalibrated the criteria for all commercial mortgage-backed securities (CMBS) in the US. We have made it clear that there would be downgrades of super-seniors and super-dupers, both because of the recalibration of criteria and because of deterioration in performance.

There are other changes to analytics. We are also making sure that the scenarios and assumptions we are using are made transparent and disclosed to the marketplace. Historically, we would have kept those assumptions to ourselves and not made them as transparent as we are now. We are also saying that, in some cases, we want to be able to disclose the underlying collateral.

What about changes in governance?
There are many changes. We have added a number of checks and balances. We have always had a separation between analytical and commercial activities, like you would have in your magazine. But we have completely reinforced that. Secondly, we have a quality function and a criteria-setting function that are separate from the rating function; all three functions now report separately. On top of that, we have created a risk management function, so that if market risks are going up, this risk management function is looking at it and asking how that will impact the ratings.
We have also appointed an ombudsman. Anybody can call him.

We have also launched an analytical certification programme with New York University’s Stern School of Business. It is again to bring commonality across the whole organization globally and also to reinforce that awareness in people’s minds. Before 2007, we were doing training, but this is with a lot more rigour.

Do you think the role of rating agencies has been misunderstood by the critics?
There were a number of people who did think of ratings as an investment recommendation. And we think our role is to make sure that people understand a rating is really a reflection of creditworthiness. It does not talk about the quality of an investment because it does not look at pricing, it does not look at volatility and so on. So we must continue to educate the marketplace, because it is in our interest that they understand what ratings are and what their limits are.

Has it been a personal battle for you to bring about the changes in the last two years?
I think when the role we play, the purpose we serve, is clear to people, it becomes somewhat easier. It becomes somewhat easier also when people understand that this is what the market expects and not just an internal whim and fancy.

You spoke about stability. What do you think should be the future of instruments that are inherently volatile?
We have certainly looked at the financial instruments we have rated, analysed them and asked why they were so unstable in some cases. If they were structures built on market value, by definition, we now recognize that they are inherently more unstable. We have been looking at those kinds of things and asking how we tackle that.

Newer and newer products get structured in the marketplace. How much of a challenge is it to keep pace with that? On the flip side, is it possible at all to have the depth of information to rate products when risk is sliced, packaged and sliced again? How do you make sense of a default happening at the far end of the chain?
Clearly, we focus on where the risks lie and how we want to think about them. That is what forms the criteria for us: what is the scope of the risk we are going to measure and what are the analytical criteria that enable us to measure it.

Do you think some instruments will have to go because they are inherently unstable?
That is true. One of the lessons learnt is that there are some instruments that were too market-value dependent, or they had triggers that the management could pull; and if the management can pull a trigger, they may take a default, the structure might devolve, and they get the benefit of that. So those are the kinds of things we have to stay on top of.

Can you name some of those instruments?
CPDO is an example. It is market-value based. [CPDO or Constant Proportion Debt Obligation is a controversial credit derivative that contains a lot of market risk rather than the credit risk that ratings agencies measure. Many CPDO issues plunged during the crisis and brought a bad name to the rating industry.]

What are the challenges over the next one year?
Many regulations are coming up and that will drive change. This is on top of things we have already done.

And do you think rating agencies will emerge unscathed from all this?
If you look at how much of a hit our reputation has taken, I don’t know what unscathed is. Certainly we are under pressure, under attention, under the spotlight. And we have to earn our reputation back. And we will do whatever it takes to regain it.


How rating agencies got discredited
Wall Street underwrote $3.2 trillion worth of home loans to people with bad credit history and unverified income between 2002 and 2007.

Investment banks pooled much of that debt into structured products and then “shopped” for the best ratings for these instruments.

Top credit rating agencies (Moody’s, Standard & Poor’s, Fitch) assigned top-notch ratings (e.g. AAA) to these products, ignoring the quality of the borrowers and the risk of defaults. Critics said the agencies lowered credit rating standards due to competitive pressure and the fact that they earned their fees from the investment banks that sold the products. The disclosures were inadequate too.

The instruments turned to junk in the aftermath of the subprime crisis and the rating agencies lost their credibility. The US, Europe and many other nations are now imposing stiffer regulations on rating agencies.

The new rules of behaviour
Don’t promote complex structured products that are inherently volatile and contain a high level of market risk.

Keep the rating team separate from the sales team to avoid conflict of interest. Don’t rate structures you helped to create.

Decline business if asked to rate favourably.

Disclose the assumptions behind a rating and the underlying collateral. Be transparent.

Make ratings comparable across geography, time and asset classes.

Monitor the external environment more closely and change ratings accordingly (for example, when market liquidity comes under stress).

(This story appears in the 11 September, 2009 issue of Forbes India.)
