Risk Controls and Subjectivity in Banking

The aggregation problem in risk

Increased automation and the adoption of high-dimensional statistics (machine learning methods) for arbitrage and asset allocation have been hailed as a new era in banking. To handle the obvious challenges these bring, some (Andrei Kirilenko [6]) have gone as far as suggesting algorithmic controls for the regulation of algorithmic trading activities. The real challenge with regulating algorithmic trading – in my opinion – is no different from that of regulating manual trading. A solution should therefore be to gather more data on decision making rather than inspecting methodologies alone.

No matter which risk measures are adopted, financial decision making at a bank is hardly a transparent process. This is largely due to structural reasons. Regulatory controls follow a necessarily fragmented approach since the goals of the business units within an investment bank (desks or divisions) are varied. The view of the overall risk at a bank is the sum total of an entire suite of models varying across its many desks. While model validation teams go through the painstaking task of validating all the models, the undertaking of an overall risk methodology is eventually about assimilating varied views tailored to the needs of particular desks. The issue that no algorithmic control can address is that the aggregation into an overall risk is essentially subjective in nature.

Differences in the perception of risk across banks are clearly not in the interest of policy (as they create undiversified risks), but they may be in the interest of banks, since such differences create risks which banks can attempt to mitigate for their clients. The task of regulating banks – made difficult by the labyrinthine models and datasets at a bank – might actually be simplified with improved reporting and transparency if banks end up relying more on automation. A shared and transparent view of long-term macroeconomic risks – in my view – is a win-win situation for everyone.

Let me also emphasise that I am not a subscriber to the banks-are-evil camp; the reason for this fragmented view is structural. The non-uniformities in risk perception arise out of traits of the clients that are known only to the respective desks at the banks. This information asymmetry is at the core of banking as a business. The trading or investing behaviour at a desk is driven by any or all of i) the time horizon of the investments, ii) the type of client (if applicable), iii) the holding period of the particular type of product and iv) the market data relevant to the security. A fragmented view of risk is necessitated as every desk resorts to managing risk in its own way.

The side effect of such a segmentation of risk is that there may be insufficient market diversification of the overall high-level risk undertaken by a large investment bank. Since a large investment bank engages with varied (nearly all) sections of the industry, and it is often only the bank that has the information to separate clients who take long-term risk from those who take short-term risks, an inherent information asymmetry arises in favour of the investment banks. Consider, for example, the task of managing the market risk associated with an equities portfolio at a bank. The equities desk is typically detached from the credit risk functions – which analyse the factors undermining the portfolio with a set of inputs different from what market risk may be interested in. If we were to understand the utility that the desks receive in a behavioural framework, the credit risk functions may elicit a utility under risk where low probabilities of loss are attached to a high loss amount, while the market risk functions may attach a (relatively higher) probability to the portfolio’s under-performance. The probabilities inferred from historical volatility in the market risk division may be disconnected from the default probabilities that the bank may obtain from a third party. A fragmented view of risk is evident as every desk resorts to managing risk in its own way. While the usage of market data by banks for “risk” purposes is hardly uniform, the effect of their own private factors on the aggregated view of risk remains unaddressed. The different perceptions of risk within a bank create a private undiversified risk for the bank.

A Behavioural view of Aggregation

The mere admission that the allocation problem is subject to a subjective view of risk could help us understand how “private” factors could aggregate to a higher level of risk observed by central banks, regulatory agencies or those with a long-term view of risk. Viewing risk incentives with Prospect Theory under information asymmetry (i.e. the bank knowing more about client needs than the authorities) might help us better understand the incentives for market participation and for maintaining the varied notions of risk within a large investment bank. That savvy investors at financial institutions do not have utility curves under risk different from those of retail investors has been wonderfully demonstrated by Abdellaoui et al. [9].

Recall that the core claim of a PT utility (see Tversky and Kahneman) is that the actions of individuals and firms alike are shaped by their perception of the future, i.e. the probability of outcomes. In a subjective framework, firms and institutions are assumed to be better equipped and more responsive than individuals, and their view of risk is necessarily different from that of individual entities. Structured-product salespersons and high-frequency traders all necessarily have a different perception of the same risk – the risk associated with the entire market whose long-term trends the regulatory bodies may observe. In terms of the model, stochastic micro-events are aggregated at different levels of an organisation, and they determine the subjective probability at each level.

A large investment bank engages with varied (nearly all) sections of the industry, and the information asymmetry this implies can be incorporated into this model as well – since the clients who take long-term risk cannot be separated from those taking short-term risks by anyone but the bank(s). The bank’s subjective view is different from that of the regulatory bodies. For example, suppose an investment bank B has clients P and Q such that P is a client in the tech sector (which is in a high-risk environment with a high chance of going bust in the next year) and Q is a client in the mining industry (which may be as stable as the country where the mines are). As a regulatory body is usually less aware of the needs of clients than the banks, regulation may at best assume that the two types of clients are being treated the same way at a particular bank – as far as their risk profile at the bank is concerned. The subjective probability used by the bank’s risk management team is necessarily different from that of the regulator.

A model of how such information flows occur could quantify the incentives towards the sharing of insider information. Recall that the high-level PT utility can be expressed in the form:

u(x,p) = \sum_{i} w(p_i)\,\upsilon(x_i)

Here, w(\cdot) is a probability weighting function and \upsilon(\cdot) is a value function (see Kahneman and Tversky for PT [5]). Both are aggregated over time. Assuming that the value function is the same across the desks (there is no reason to believe that a dollar gained at trading desk A should be viewed differently from one gained at trading desk B), the goal in the empirical analyses would be to elicit the probability weighting function parameters that “explain” decisions based on optimisation of u(x,p) in the data. The time horizon, the type of trading (intraday, volatility etc.) and the holding period all influence the formulation of probability and the parameters of this weighting function. The focus is to understand how the information required for the activities of the bank is aggregated from the information that is available to each layer.
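
As a hedged illustration of what such an elicitation could look like in practice – a minimal sketch on synthetic data, assuming the Tversky-Kahneman weighting form, a linear value function and a logistic choice rule (none of which is the specification of any particular desk):

```python
# Hypothetical sketch: eliciting a probability-weighting parameter from
# observed choices between single-outcome prospects. Weighting form and
# choice rule are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def w(p, gamma):
    """Inverse-S probability weighting function (Tversky-Kahneman form)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def pt_value(x, p, gamma):
    """PT utility of a prospect paying x with probability p, linear value function."""
    return w(p, gamma) * x

# Synthetic data: choices between prospect A = (x_a, p_a) and B = (x_b, p_b).
n = 2000
x_a, p_a = rng.uniform(10, 100, n), rng.uniform(0.01, 0.99, n)
x_b, p_b = rng.uniform(10, 100, n), rng.uniform(0.01, 0.99, n)

gamma_true = 0.6
diff = pt_value(x_a, p_a, gamma_true) - pt_value(x_b, p_b, gamma_true)
choose_a = rng.random(n) < 1 / (1 + np.exp(-0.1 * diff))   # noisy logistic choice

def neg_log_lik(gamma):
    d = pt_value(x_a, p_a, gamma) - pt_value(x_b, p_b, gamma)
    prob_a = 1 / (1 + np.exp(-0.1 * d))
    return -np.sum(np.where(choose_a, np.log(prob_a), np.log(1 - prob_a)))

fit = minimize_scalar(neg_log_lik, bounds=(0.2, 1.5), method="bounded")
print(f"true gamma = {gamma_true}, elicited gamma = {fit.x:.3f}")
```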

This behavioural view could help us understand whether there are enough incentives for sharing information about risks through the markets when there are disparities in risk perceptions. If there are not sufficient incentives to participate in the market, then the differences in risk perception may be a necessary evil that sustains long-term risk in the bank. Incentives for market participation could develop if increased transparency on risk aggregation is provided to regulatory bodies, instead of focusing all the attention on the details of the models used to price securities in the books run by each desk.

References

1 “Innovations in Finance with Machine Learning, Big Data and Artificial Intelligence”, J.P. Morgan Quantitative and Derivatives Strategy (2017).

2 Douglas W. Diamond and Raghuram G. Rajan, “Fear of Fire Sales, Illiquidity Seeking, and Credit Freezes”, The Quarterly Journal of Economics 126, 2 (2011), pp. 557-591.

3 Martin Evans and Richard Lyons, “How is Macro News Transmitted to Exchange Rates?”, Journal of Financial Economics 88 (2008), pp. 26-50.

4 H. Joel Jeffrey and Anthony O. Putman, “Subjective Probability in Behavioral Economics and Finance: A Radical Reformulation”, Journal of Behavioral Finance 16, 3 (2015), pp. 231-249.

5 Daniel Kahneman and Amos Tversky, “Prospect Theory: An Analysis of Decision under Risk”, Econometrica 47, 2 (1979), pp. 263-291.

6 Andrei Kirilenko, Albert S. Kyle, Mehrdad Samadi and Tugkan Tuzun, “The Flash Crash: High-Frequency Trading in an Electronic Market”, The Journal of Finance 72, 3 (2017).

8 Bruno de Finetti, “La Prévision: Ses Lois Logiques, Ses Sources Subjectives”, Annales de l’Institut Henri Poincaré 17, 1 (1937), pp. 1–68.

9 Mohammed Abdellaoui, Han Bleichrodt and Hilda Kammoun, “Do Financial Professionals Behave According to Prospect Theory? An Experimental Study”, Theory and Decision 74, 3 (2013), pp. 411-429.

Posted in economics

Terminology and politics

Politics is hardly what this blog is concerned with, but an issue with the nature of semantics that recurs in discussions of politics has seemed worth some attention in this post.

Having lived in Asia, Europe and North America, I am interested, for example, in how the word socialism is used across different geographies. In some places, the word represents the post-war collectivist policies and appears to be accepted as a good thing overall. But at the other extreme, the word is detested by the very people whom socialism is meant to champion (see for instance this chart of how the democratic party with a “socialist” agenda is perceived in the US – https://www.bloomberg.com/graphics/2020-election-trump-biden-donors/ ).

What’s at play here is not merely differences arising out of varied experiences with the socialist movements of the early 20th century. The notion of socialism also varies across time, as successive political movements have adapted the word to their needs. The point I am trying to make is that political terminology is subject not only to variation of context across geographies but also to shifting perception through generations within the same cultural context.

To demonstrate that there is no easy way out of this problem (and why it must exist), let’s imagine an extremist structural linguist who ensures the abolishment of a word after its defined purpose has been served (I know that tying a word’s semantic meaning to its etymology has never worked in the history of languages, but bear with me for the sake of this argument). If we trace the history of socialism back to – let’s say – Marx, it may be evident that many of the things Marx would have been content with had already arrived at the heart of capitalism by the years soon after the war, with the New Deal. But undoing a word from human memory is neither possible nor desirable – particularly when the word represents a utopia.

So at no point could a structural linguist stop the usage of the word socialism. It is thus not surprising that the word is interpreted as collectivism in economic matters and anarchism in social matters – used to achieve varied political goals with promises of political utopia. It is also unsurprising that opposing political movements have incorporated each other’s policies while using the word socialism as they pleased. In the end, the socialism of Marx is different – to any observer – from that of Lenin, whose socialism in turn is much different still from that of Obama. To use the word socialism in a serious discussion on economics is therefore an open invitation to confusion.

Not all political terminology meets such a fate, but I take note of the issue only to remark that economists are careful to stay away from such all-encompassing, loaded terminology. An economist may observe that the living conditions of the erstwhile working classes have risen after the war – and that collectivism does not appeal so much to many of us any more. But she is careful not to attribute poverty or success to political terms such as “socialism”. It is otherwise easy to get carried away with loaded terminology – which creates its own assumptions that influence conclusions.

Debate is an essential part of economics, but so is the use of precise terminology. A paper by Manski on educational policies is an instructive example of how the limits and successes of markets can be understood without letting political views shape one’s outlook. Elsewhere, Manski has also advised against the use of terms such as “social capital” – which are subject to interpretation and leave an analysis open to adjusting observations and measurement criteria. Indeed, since our languages are not logical structures, we may never completely get rid of the biases inherent in terminology, but it is well worth pinning down definitions precisely in an economic analysis and stating our known biases beforehand.

Posted in Uncategorized

cost functions and the appeal of duality theory

Given the tractability of logarithmic functions, it should not be surprising that Cobb-Douglas utility functions are liked by everyone. But in the context of consumer economics, econometricians (particularly those inspired by Deaton) have hailed cost functions as the more tractable and yet more powerful choice. These are indeed what lead to the popular AIDS method for the estimation of elasticities.

I have come across the need to find the cost function for the utility u = \sum_{i}\beta_{i} log(q_i-\gamma_i) (where \sum{\beta_i}=1 ) so many times that I thought it worth highlighting the corresponding cost function and comparing it with the cost function that comes from PIGLOG utilities. Readers who have gone through Deaton’s classic textbook – Economics and Consumer Behaviour – would know this from the very first chapters of the book.

If the income constraint is x=\sum_{i} p_iq_i = \textbf{p}\cdot \textbf{q} , the optimisation using a Lagrangian is straightforward : L = u - \lambda (\sum{p_i q_i}-x). Setting first-order conditions, we have

x=\sum{p_iq_i}

\frac{\beta_i}{q_i-\gamma_i} = \lambda p_i.

Now, using \sum{\beta_i}=1, we can write \lambda(x-\sum{p_i \gamma_i})=1 and therefore, p_i( q_i - \gamma_i) = \beta_i (x-\sum {p_i \gamma_i}).

So far we have the Marshallian demand function. The cost function is the inverse of the indirect utility function. So we write the indirect utility by substituting the optimal q_i into the utility function u, as u = \sum{\beta_i} log(q_i-\gamma_i)= \sum \beta_i log(\frac{\beta_i}{p_i}(x-\sum{p_k\gamma_k})). Again, using \sum {\beta_k}=1, we can write u=\sum \beta_i log(\frac{\beta_i}{p_i})+\sum \beta_i log(x-\sum{p_k\gamma_k}) \Rightarrow u = \sum \beta_i log(\frac{\beta_i}{p_i})+ log(x-\sum{p_k\gamma_k}) .

The cost function thus becomes:

log(c(u,\textbf{p})-\sum{p_k\gamma_k})= u - \sum \beta_i log(\frac{\beta_i}{p_i})
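
As a quick numerical sanity check of the derivation above (a sketch with made-up \beta, \gamma and prices), the Marshallian demands, the indirect utility and this cost function should fit together: plugging the optimal q_i into u, and u into c(u,\textbf{p}), should recover the original outlay x.

```python
# Sketch: numerical consistency check of the Stone-Geary / LES results above,
# with arbitrary illustrative parameters.
import numpy as np

beta = np.array([0.2, 0.3, 0.5])      # budget coefficients, sum to 1
gamma = np.array([1.0, 2.0, 0.5])     # subsistence quantities
p = np.array([2.0, 1.5, 3.0])         # prices
x = 50.0                              # total outlay

# Marshallian demands: p_i (q_i - gamma_i) = beta_i (x - sum_k p_k gamma_k)
supernumerary = x - p @ gamma
q = gamma + beta * supernumerary / p

# Direct utility at the optimum = indirect utility
u = np.sum(beta * np.log(q - gamma))

# Cost function: log(c(u,p) - sum p_k gamma_k) = u - sum beta_i log(beta_i / p_i)
c = p @ gamma + np.exp(u - np.sum(beta * np.log(beta / p)))

print(f"budget spent : {p @ q:.6f}")   # equals x
print(f"cost c(u, p) : {c:.6f}")       # recovers x as well
```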

The PIGLOG functions are actually more general than the above. Based on the rather simple assumption that the utility scales linearly with the logarithm of outlay log(x), i.e. u = \frac{log(x)-a(p)}{b(p)-a(p)}, the cost function can be written as log(c(u,\textbf{p})) = a(p) (1-u) + b(p) u. Deaton presents a very intuitive view of this cost function as one involving the costs of subsistence (u=0, i.e. a(p)) and bliss (u=1, i.e. b(p)). In this form, the cost function above would be written as:

log(c(u,\textbf{p})-\sum{p_k\gamma_k})= (1-\sum \beta_i log(\frac{\beta_i}{p_i}))u +(-\sum \beta_i log(\frac{\beta_i}{p_i}))(1-u)

The intuition from duality shows just how restrictive Cobb-Douglas-like functions really are. The strong claim here is that the price derivatives of the cost function (which, in logarithmic form, are the budget shares) are the same for both the subsistence and the bliss sub-utilities. The real breakthrough with AIDS – developed by Deaton and Muellbauer (relying on the theoretical work by W. M. Gorman) – is the ability to provide a general yet tractable cost function.

While we find it easy to speak in terms of the direct utility in plain English (the consumer gets more utility from bread than from vegetables etc.), the equivalent dual form actually has far more advantages. A job that the book on consumer economics by Deaton does really well is to explain the wide literature in microeconomics in terms of the duality theory. Indeed the cost-functions can be used to make arguments that are no less intuitive than what we’re used to with direct utility functions. The textbook is one of those rare books that introduce a rather subtle novel concept while still covering the wide span of the topics in consumer economics.

Just as the direct utility function with a budget constraint tells us that a higher price would make it difficult to obtain the same utility (comprising \textbf{q}=\{q_i\}), the cost function tells us in a less convoluted way that a higher price would cause a consumer to get less utility for the same cost (x=c(u,\textbf{p})).

In an example slightly more complicated than the relative preference between apples and oranges (or taxi and car rides) from introductory courses in consumer economics, we may look at the intertemporal substitution problem. Here, a consumer chooses between consumption c_t in period t and increasing an asset account A_t (which would accrue over a lifetime and probably grow at a rate r) by adding \Delta A_t. The logarithmic approach would tell us that the price sensitivity of the sub-cost-functions for bliss and subsistence ought to be the same. This is less problematic than the apples-versus-oranges comparison because we are talking about the saving rate of the consumer (the choice between consuming in the current period and saving for the future), which might arguably be the same at subsistence and bliss levels. But if we relax that assumption, then a more general cost function seems more appropriate. If we assume that the consumer’s budget ratios are more A_t-heavy at subsistence levels (one desperately wants to be richer when one is poorer) but more c_t-heavy at bliss levels, then the logarithmic (Cobb-Douglas-like) formulation is no longer appropriate.

While an appropriate cost function does provide a more general demand function, remember that for it to lead to an econometric method, the conditions of homogeneity of the demand function, symmetry of its price derivatives and negative semi-definiteness of the Slutsky matrix must also be fulfilled (imposing these conditions is indeed what leads to the AIDS methodology). These issues become necessary to consider when derivatives of the cost function are equated to budget shares in the estimation.
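
A hedged sketch of what imposing two of those restrictions can look like in practice (illustrative coefficients, not a full AIDS estimator): symmetry makes the price-coefficient matrix symmetric, homogeneity makes each of its rows sum to zero, and the negativity of the implied Slutsky matrix still has to be checked separately at the point of estimation.

```python
# Illustrative sketch: impose symmetry and homogeneity on a price-coefficient
# matrix, as would be done before (or within) an AIDS-style estimation.
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Unrestricted price coefficients (e.g. from equation-by-equation OLS).
G = rng.normal(scale=0.05, size=(n, n))

# Step 1 -- symmetry: keep the symmetric part.
G = 0.5 * (G + G.T)

# Step 2 -- homogeneity: project onto symmetric matrices with zero row sums.
one = np.ones((n, 1))
r = G @ one                      # row sums
s = float(one.T @ G @ one)       # grand sum
G = G - (r @ one.T + one @ r.T) / n + (s / n**2) * (one @ one.T)

print("row sums :", np.round(G.sum(axis=1), 12))   # ~0 (homogeneity)
print("symmetric:", np.allclose(G, G.T))           # True
```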

Setting aside the estimation issues for now, we may want to introduce the assumption that the consumer’s inclination towards assets A_t versus c_t changes with the value of A_t. There seem to be two ways to model this. The first is to argue that the price derivatives depend on the value of A_t as well, instead of on the prices \textbf{p} alone. The second is to argue that prices themselves change based on the value of A_t. I prefer the latter approach as it is more realistic without imposing significant restrictions – the prices of assets that individuals face vary based on how many assets they already have. The better opportunities for future wealth tend to be limited to those who are already wealthy. The heterogeneity of prices is also supported by the observation that the poor live in areas where bliss is lower (overall), since it is not possible to obtain utility as high as where the rich live (where b(p) is high). On the other hand, there may be a big-fish effect as well, as someone not so rich might want to stay in a poor neighbourhood so as to enjoy a higher relative income (this is unlikely to happen at lower levels of subsistence).

How peer effects and poverty play out in the preference for saving versus consumption is a topic worth exploring in itself, but the heterogeneity of prices highlighted above raises another interesting question: how to treat the unavailability of items within a commodity. You could find an expensive Korean restaurant in London or New York (in the right neighbourhood/island anyway), but most small-town residents in Europe and America are deprived of even a decent bulgogi. Does that mean the price of bulgogi in Oberbierbach is infinite? One could indeed import items – but that would mean import prices having an influence on consumer choice – a concern that seems misplaced. Even if a bulgogi isn’t available in the small town where I live, I know the utility I am going to get from a bulgogi without considering its price. Like me, most consumers who live in the small town probably couldn’t care less for a bulgogi.

The point here is that in any real-life dataset, prices and needs are faced by a consumer – who then makes choices for “commodities” rather than items. Thus the prices of commodities in our model of utility are necessarily localised. In other words, an adjustment of prices must always be informed by local conditions. Such a consideration of utility and local prices for commodities incorporates peer effects (the locality of prices) into consumer utility. Life-cycle effects could also be introduced in the above intertemporal substitution. Here too, cost functions need not be shunned in favour of a direct utility approach.

Posted in Uncategorized

Subjective probability in economics

In a brilliant paper, the authors [1] summarise De Finetti’s views on subjective probability. As someone who had been content with the Laplace view of probability – relying on equivalent choices (see Richard Jeffrey’s “Subjective Probability: The Real Thing” for a discussion of the two perspectives) – I find the subjectivist view particularly interesting.

In fact, it would not be unfair to say that I am completely sold on the idea of subjective probability. My objectivist stance ran into several problems long before I embarked upon subjective probability, but I had hoped that with practice alone I might be able to better understand the notion of probability. More specifically, while I never had problems with the toss of a coin being associated with a ratio of 1/2, I was uncomfortable using that theory to understand how people gauge their confidence in the future. I could see that what everyone does is guessing – but my objectivist view had no option other than assuming a certain distribution of possible future points before the analysis. Keynes also seems to have registered this problem. While his methods may not have used measure-theoretic interpretations of probability, his approach favoured the subjectivist view of probability.

The trouble in economics – I think – is that we observe systems that change with the perceptions of the agents in the system. Whether people perceive immigration as a benefit or not may matter more to policy than the past contributions immigrants have made. While a true objectivist could treat perceptions as inaccurate observations of an objective truth, almost anyone else would favour a world view that finds some truth in sentiments. I would go further and say that a perspective that views institutions in an economy as an equilibrium of human sentiments is more accurate than one where they are seen as a balance of objective truths.

That humans fundamentally don’t behave rationally is evident from the many paradoxes that the behavioural sciences have explored. If we can agree that probability is subjective to the agents in a model of the economy, a subjective view of probability makes more sense to a modeller. Since individual agents base their decisions on some perception of risk, these probabilities ought to be incorporated in our models. I think de Finetti’s definitive work on probability theory [1] does provide an axiomatic framework for subjective probability that allows us to employ econometric methods to handle biases and perceptions directly.

Within the realm of economics, no one may have contributed more to the popularisation of De Finetti’s work than Savage – who formalised consequences and probabilities together with acts (affectionately referred to as “Savage acts”). While Savage wrote his treatise on probability with a hope for minimax theories, he later leaned towards a Bayesian perspective – realising that subjective probability must lead to Bayesian methods. Richard Jeffrey [4] explains this convergence with an emphasis on conditional probability.

If data has to bear a notion of dependence on probability, then it would be meaningless to imagine that there exists a probability that is independent of fundamental revisions in the assigned probability. Instead of assuming an ontological notion of probabilities, Bayesian probability only offers a way to stay incorrect about the actual notion of probability (Jeffrey [4] points out that the original Latin word errare means to wander about rather than to commit a mistake). Unlike a frequentist, the Bayesian does not need to believe (while drawing inferences) that she is sampling the infinite-limit probability in her experiment. The Bayesian only manages her error to stay consistent with the data – a perspective that de Finetti spends much time justifying in his “Prévision” paper [1].

From an economist’s perspective, Savage’s view of subjective probability (which he calls “personalistic” probability) seems more pragmatic than the formal treatment offered by De Finetti (e.g. the proof of the law of large numbers which allows the theory to converge with conventional views of probability, such as Poincaré’s). The true motivations and extent of de Finetti’s assertions about uncertainty may still be a subject of research in the history of mathematics, but his exchangeability theorem forms the basis of the Bayesian algorithms in use today. I think economists could benefit immensely from the use of Bayesian econometrics in their science.
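
A small illustration of the exchangeability point (see note 2 in the references below), assuming a Beta prior over a coin’s bias: the posterior depends only on the counts of heads and tails, not on the order in which they occurred.

```python
# Sketch: under a Beta prior, the posterior over a coin's bias depends only on
# the number of heads and tails, not their order -- the operational content of
# exchangeability in this simple example.

def beta_binomial_posterior_mean(tosses, a=1.0, b=1.0):
    """Posterior mean of P(heads) under a Beta(a, b) prior, given a 0/1 sequence."""
    heads = sum(tosses)
    tails = len(tosses) - heads
    return (a + heads) / (a + b + heads + tails)

seq1 = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]   # nine heads, then a tail
seq2 = [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]   # the tail first, same counts

print(beta_binomial_posterior_mean(seq1))   # 10/12 ~ 0.833
print(beta_binomial_posterior_mean(seq2))   # identical
```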

References

  1. Alberto Feduzi, Jochen Runde and Carlo Zappia, “De Finetti on Uncertainty”, Cambridge Journal of Economics 38, 1 (2014), pp. 1-21.
  2. In simple words, exchangeability means that our notion of subjective probability based on past experience does not rely on the particular order of events. In the example of coin tosses, while someone tossing a coin (results: Heads/Tails) may assign a subjective probability higher than 0.5 to the appearance of Heads after observing 9 heads in 10 tosses, there is no reason to think that she needs those 9 heads out of 10 to occur in any particular order.
  3. Leonard J. Savage, “Elicitation of Personal Probabilities and Expectations”, Journal of the American Statistical Association 66, 336 (1971).
  4. Richard Jeffrey, Subjective Probability: The Real Thing.
Posted in statistics, economics

Utility curves and classroom economics

Mathematicians are quiet people. Instead of arguing about their case, they are more often busy thinking about the boundary conditions of their assertions. In the rare moments of speech, they come across as inflexible and pedantic to many.

Although married to mathematics, economics seems to behave differently. Economists discuss a lot, and even non-economists seem to have an opinion about the economy. People arguing about rational expectations need not recall the expectation formula from their undergraduate math. Students of economics can easily talk about economic variables – capital, disposable income and utility – as if they were real, measurable entities, things believed to exist beyond the models in the classroom.

But reality bites those of us who choose to work with data. Data measurement errors affect some variables more than others, and some variables always remain proxies. The early researchers who talked of utility – Houthakker and Prais, for example – were unequivocal in pointing out that the average consumer doesn’t actually exist except in the discussion of the models. While there is nothing wrong with a discussion of the right policy to shift consumers’ utility without worrying about practical considerations, it may be disastrous to forget that our economic conclusions often rely on certain simplifications of reality.

A specific problem that I have been concerned with is the lack of price surveys in consumption microdata. We tend to know the details of what consumers bought, but often an estimate of prices isn’t available. One way to get around the problem is to ignore that prices vary and assume that consumers within the same geographical area (or social class, or whatever parameter clusters prices) all face the same prices for the same purchase out of the same consumer basket available to them. In fact, researchers often use unit values as prices in the model, as in the sketch below. The measurement error that this results in is discussed in great detail by John Gibson and Bonggeun Kim in their paper “Quality, quantity, and spatial variation of price: Back to the bog”.
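
A minimal sketch of that practice (illustrative column names and cluster definition, not any specific survey): unit values are computed as expenditure divided by quantity, and a cluster median is then used in place of the missing price – which is exactly where the quality-related measurement error enters.

```python
# Sketch: computing "unit values" from a consumption survey and using them as
# price proxies. Column names and the cluster definition are illustrative.
import pandas as pd

survey = pd.DataFrame({
    "cluster":     ["A", "A", "A", "B", "B", "B"],
    "item":        ["rice", "rice", "rice", "rice", "rice", "rice"],
    "quantity_kg": [5.0, 3.0, 8.0, 4.0, 6.0, 2.0],
    "expenditure": [6.0, 4.5, 8.8, 5.2, 9.0, 3.4],
})

# Household-level unit value: expenditure per unit of quantity.
survey["unit_value"] = survey["expenditure"] / survey["quantity_kg"]

# Cluster-median unit value, often treated as "the" local price -- note that it
# mixes genuine price variation with household quality choices.
local_price = survey.groupby(["cluster", "item"])["unit_value"].median()
print(local_price)
```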

Considering such errors, it is worth exercising caution in classroom discussions of utility curves. When prices themselves are approximations for the items being consumed, the accuracy of model predictions is likely to suffer. Yet this practical limitation is often sidelined in favour of discussions on utility. What does utility mean if the items (apples, oranges etc.) are not uniformly available to all consumers and if their prices are not recorded in a survey? Asking students to calibrate utility curves before learning about utility is akin to putting the cart before the horse, but admitting that such problems shape our view of utility could help bring the humility that working with real-world data often requires.

Posted in statistics, economics

Can positional goods exist in a developing country?

Economists have explored relative prosperity as a determinant of personal happiness for quite some time. A challenge to rational expectations theory appears when, despite being richer in absolute terms, a young adult in a poor urban neighbourhood in the developed world may be unhappier than the elite in many developing countries. Discussing necessities in this context of relative poverty, Sen, for example, points out that a television is a need for the school education of a British child in a way it isn’t for a Tanzanian child [4]. The resolution of relative poverty therefore does little to address absolute poverty (and vice versa).

The resolution of relative poverty is nevertheless important – if not for the evasive goal of happiness, then for the risk aversion and erosion of social equity that severe economic inequalities can bring. Taking a centrist position in a provocative book [19], Robert H. Frank argues that status competitions are ingrained in our hormones. Exploring incomes amongst professors competing for grants, he notes that neither the permanent income model nor productivity models explain the distribution of income. What offers a better explanation is the presence of status competition among the professors [19].

Focusing primarily on crowding in Western society, Fred Hirsch had given similar forewarnings of the effects of intensifying status competitions [7, 10]. In modern society, he argues, needs had expanded manifold since the time when Adam Smith’s invisible hand was known to work. In eighteenth-century Europe, the rich could pursue their interests while the poor gained mobility in exchange for their participation; in modern times, after the immense success of capitalism in the twentieth century, this exchange is no longer appealing [3, 11]. With no social mobility left to offer in exchange, the distribution of resources was to occur through status competitions – a post-war process in which positional goods and advertisement had a major role to play.

While Hirsch does seem to engage in some prophesying as he warns of the rationing of living space and other public goods through status competitions, there are two key phenomena relevant to the mechanics of status competitions – scarcity and congestion. With the physical scarcity of goods largely conquered (food and amenities), scarcity appears largely social in the developed countries. More commercialization leads to more scarcities and more competition – hence congestion for goods provides a measure of the degree of social scarcity.

In my opinion, the difference between physical and social scarcities remains unclear in his analysis. That is not just because it is a difficult classification problem but also because, according to Hirsch, the boundary between physical and social scarcity can be blurred by positional goods creating a market for themselves. In the context of the developing world, the creation of scarcity through “overuse” (which Hirsch stresses as the engine of social scarcity) is less relevant, as the congestion for goods is less often driven by status needs (physical scarcities are severe in the developing world). That said, with recent developments in the underdeveloped world we are more likely to see a mixed effect of commercialization and physical scarcities, with an increasing effect of status competitions. The distinction between physical and social scarcity is less relevant in this regard.

Frank’s interpretation of social scarcity provides a more pragmatic view. In Frank’s model, social interaction amongst participants is a proxy for congestion. Focusing primarily on income distribution, his axiomatic claim – that in the absence of monopolies, corporations cannot survive by rewarding talent alone and are thus compelled to depend on status competitions for income distribution – provides a microeconomic illustration of social scarcity (wherein attributes such as workplace safety get overpriced because of status maximization goals). The role of social interaction is equally relevant in the developing countries, where industrial development and societal competitions have interacted and clashed very recently.

Like Schumpeter, Hirsch also viewed the industrial revolution as a legacy of liberal capitalism – a race amongst the middle classes to achieve the higher social positions once held by the feudal elite [7, 11]. Developments of the last century in Africa and Asia bear similarity to this phenomenon, where a new working class has clashed with the feudal and colonial systems of the century before. The growing status competitions amongst the nascent working classes have been a subject of sociological study. In the India of the 1950s, this competition was termed Sanskritization, as erstwhile lower classes emulated higher social classes with newly acquired economic freedoms [13].

Incidentally, both Hirsch and Frank have argued for policy control of status competitions for positional goods. Hirsch summarizes the problems of controlling distribution as an “adding-up problem” – where a group of individuals fail to pursue a common goal because that goal (e.g. the defence of public goods or safety) isn’t broken down into individual responsibilities (“when everyone stands on tiptoes, no one sees better”) [7]. Industrial development in the developing world, however, has hardly followed the route of Georgian England. The industrial class in these countries is small in size and poor in absolute terms. The problems of extreme poverty have remained largely unresolved in large swathes of Asia and Africa.

It would be inaccurate, though, to draw wide conclusions based on economic poverty alone. Modern poverty is of a different nature from that of Georgian England. The administrative successes and stability of post-colonial governments in Africa and Asia have varied, depending largely on the extent of the agrarian empires that existed before. The extractive administrative frameworks of the Ottoman or Mughal empires, for example, could be adapted well by European colonists in Asian countries when compared with the administrative units (as much as the political boundaries themselves) created in sub-Saharan Africa.

What we see more often in Asia and Africa are the effects of decolonization – a process that encompasses the loosely similar post-war political movements in Asia and Africa aspiring to establish nation-states. While centralization had been attempted for decades in both Asia and Africa (curtailing local-level status competitions and individual freedoms alike), its limited reach and success prevented the institutional expression of status competitions. As the barriers have broken down since the fall of the Berlin wall, competitions that may otherwise have been limited to tribal or local levels have just started expanding to urban settlements [1]. The study of urban versus rural communities in Africa – especially in the context of Base of the Pyramid (BoP) initiatives – is of particular interest.

Let’s look into a little history for a few sub-Saharan African countries. Starting with Nigeria, we see some effects of missionary education in the country, where regional disparities in education levels persist between the North and the South. The North has had a stronger Islamic influence, and the uniformity desired by the post-colonial government faced initial challenges.

Having been a British colony, which historically welcomed the participation of native authority, market forces had been left relatively untouched in the country. Little was done to improve the conditions of the wage-driven peasantry – a trend that continued well into the post-colonial era. Then came African socialism, and the power of the merchant class became limited as well. In more recent decades, when MNCs could have brought more power to a working middle class, their presence did not change the fact that capital remains controlled by a small minority – an environment in which only state-monopolized industries and an informal sector seem to have expanded [9].

While the BoP initiatives may not have created a sufficient base for entrepreneurs, they have revived a focus on education and expanded the market for industrial goods. On average, 42% of the workforce in Nigeria has secondary education or higher. Up to 28% of those in mere survival activities have a secondary school certificate, and 12% have post-secondary qualifications [12]. Newly urbanized indigenous tribes and newly educated classes have taken up jobs that had earlier required a much lower level of education. The crowding hardly resolves the underlying problems with the economy – the formal sector is in the doldrums. It would be fair to say that the state of the economy, the rapid population rise and the resulting migration from rural areas have given rise to conditions where social scarcities may thrive [12].

Let’s move on to the next country – Tanzania – which was no less than an epicentre of the African socialism movements. In 1974, it even offered help to Mozambique in its liberation movement. As in most of the post-colonial world, a planned economy seemed the way forward under the influence of Nyerere. However, once political independence was achieved, the membership of nationalist parties declined, and the separation of the civil service from political institutions slowly became less important.

The reduction of the private sector did not face much opposition under Nyerere’s leadership. Thus, with a lack of support from workers and a ban on producer-consumer societies, a few inconsistencies appeared in the socialist model. The import subsidies seem to have underdeveloped the industrial sector – the approach of import-substituting industrialization (ISI) leading to oversubsidization. A rent-seeking bureaucracy allowed the oversubsidizing to spread across other sectors, letting capacity utilization fall in the industrial sector [14].

Only public officials seem to have had an advantage in becoming entrepreneurs – and the problems around corruption have always posed limitations to trade reforms in the country (particularly in the energy sector) [14]. With a state-regulated economy having no way to expand, the growth of a parallel economy has been inevitable.

Electricity is available to only about 10% of the population (rural users of electricity spend roughly 10% of their household income on the bill). The use of internet communications is higher in Tanzania than the African average, but access to finance is low (albeit rising) for the private sector. Quality of life differs significantly between urban and rural regions, and the size of the informal sector (60%) is significant [2]. It shouldn’t be surprising that lifestyle differences exist in the country.

Moving on to Angola – the country achieved independence from Portugal in 1975, after which the competition between the different movements vying to lead the country descended into civil war. The Popular Movement for the Liberation of Angola (MPLA), a Marxist-oriented group that included urban intellectuals, nominally led the country [21]. Similar to the other post-colonial developments, state-controlled companies were to thrive. Sonangol, the state oil company, seems to play a quasi-fiscal role according to economists from the Western economies. The economy’s dependence on oil revenues also makes economic diversification difficult [21]. Business with China is booming, and urban change has finally arrived in Angola.

In Kenya, political conditions have appeared to be an equilibrium of multiple ethnicities, with clan dynamics playing a big role in social spheres. When resettlement was attempted under Kenyatta’s leadership, the non-Kikuyu population was quick to express its discontent. Other attempts at nationalization – taking control of food sales and establishing purchase centres – met with similar disappointments. The institutional problems persist. The prevalence of small-scale independent enterprises, and the lack of support offered to them, has not been addressed by governments or the private sector [15].

In summary, the outreach and resources of governmental institutions in the developing world are limited, and a large industrial sector at the scale of China’s has been out of reach for most African countries [17]. Small-scale private enterprise – which forms the majority of the non-agrarian workforce in sub-Saharan Africa – receives little governmental assistance. The expansion of the informal economy has continued and migration to urban areas has multiplied. Urban migration is often seen as a necessary phase of urbanization, followed by competition between the industrial and agricultural sectors for labour and food [6]. The side effect of this development that we are concerned with is the complex interaction between tribal identities and economic development.

With investments pouring in from South Africa and Western countries, education has become part of a healthy competition in African countries. At local levels, as the indigenes displace the non-indigenes and the newly educated displace the less numerous previous workforce, increased trade is expected to homogenize the varied identities.

Urban migration is difficult to control in African countries. From the point of view of the rural migrant, a flight to urban areas is often an escape from despondent circumstances as well as an opportunity for improbable social mobility. In developing economies, cities provide a range of products that are entirely absent from agrarian rural settings. Overcrowding and massive informal sectors in urban areas indicate possibly irrational obsessions with industrial goods and a lifestyle with global appeal.

Given these conditions – more specifically, the failed project of homogenisation of the recent past and a sudden rise in the services sector – an obsession with industrial goods is likely to develop in the African countries.

Before we look into the demand for status goods (iPhones and such), let’s think a bit about what we mean by a status or signalling product. If only the rich could afford electricity in a society, wouldn’t electricity also be a signalling product (does it really have to be an iPhone or a watch)? One way economists define a status good is the Veblen good – whose demand goes up even as its price goes up (since the good is even more appealing to its consumers when it is pricier). In the same spirit, Hirsch considers “overuse” as a criterion for the phasing out of a Veblen good – whose signalling qualities decline when individuals use a commodity too much. These definitions would imply that electricity is not a signalling product. Its price would clearly decline if there were simply more of it, and it certainly does not become less attractive with overuse (although it would matter less for status if there were more of it).

Another reason why an economist would not consider electricity a status good is that it is supposed to be actually good for society. Electricity – in Hirschian terminology – is a direct physical scarcity, and it follows that the rich spending more on electricity would potentially fund employment and other opportunities (including the expansion of power plants) – putting the invisible hand to work so that trinkets can be turned into bread [16].

In my opinion, the conditions in the developing countries are not that different from those of Georgian England – if one considers relative poverty. Despite the disparate political climates, it is difficult to deny that more resources (through income and assets) are more important for status than any industrial goods. The role of income differences should thus be considered in any study of status-related consumption. This is a point that is missed in much of the literature on conspicuous consumption that has been extended to developing countries.

References

[1] Indigenous People in Africa: Contestations, Empowerment and Group Rights. Africa Institute of South Africa.

[2] “Tanzania Country Brief”, World Bank, 2009.

[3] Adam Smith, The Theory of Moral Sentiments, 1759.

[4] Amartya Sen, “Poor, Relative speaking”, Oxford Economic Papers, vol. 35, no. 2, pp. 153-169, 1983.

[5] Catherine Dolan and Kate Ro, “Capital’s New Frontier: From “Unusable” Economies to Bottom-of-the-Pyramid Markets in Africa”, African Studies Review, vol. 56, no. 3, pp. 123-146, 2013.

[6] Douglas Gollin, “The Lewis Model: A 60-Year Retrospective”, Journal of Economic Perspectives, vol. 28, no. 3, pp. 71-88, 2014.

[7] Fred Hirsch, Social Limits to Growth. Routledge and Kegan Paul Ltd, 1977.

[8] Gary S. Fields, “A Welfare Economic Analysis of Labor Market Policies in the Harris-Todaro Model”, Journal of Development Economics, vol. 76, no. 1, pp. 127-146, 2005.

[9] Gavin Williams, Nigeria: Economy And Society. , 1976.

[10] John Kenneth Galbraith, The Affluent Society. , 1958.

[11] Joseph Schumpeter, Capitalism, Socialism and Democracy. , 1942.

[12] Kate Meagher, “Leaving no one behind?: Informal economies, economic inclusion and Islamic extremism in Nigeria”, Journal of International Development, vol. 27, pp. 835-855, 2015.

[13] M N Srinivas, “A Note on Sanskritization and Westernization”, The Far Eastern Quarterly, vol. 15, no. 4, pp. 481-496, 1956.

[14] Michael F. Lofchie, The Political Economy of Tanzania: Decline and Recovery. University of Pennsylvania Press, 2014.

[15] Michael G Schatzberg, The Political Economy of Kenya. Praeger, 1987.

[16] Philip Henry Wicksteed, The Common Sense of Political Economy. Routledge.

[17] Pranab Bardhan, Awakening Giants, Feet of Clay: Assessing the Economic Rise of China and India. Princeton University Press, 2012.

[18] Richard A Schroeder, Africa after Apartheid : South Africa, Race, and Nation in Tanzania. Indiana University Press, 2012.

[19] Robert H Frank, Choosing the Right Pond. OUP USA, 1993.

[20] S J Prais AND H S Houthakker, The analysis of family budgets. Cambridge University Press, 1955.

[21] Stephanie Hanson, “Angola’s Political and Economic Development”, Council on Foreign Relations, 2008.

Posted in economics

Chinese century?

China as an imminent superpower has been a popular theme in the media. Even as a casual reader of political news, one cannot but wonder at the marvels of the Chinese century. A united, powerful and influential China, after all, would be the country that China has been for most of its history. From the compilation of the first encyclopaedias to the standardization of the written language and the overseas voyages – China is known for the awe and enchantment it evokes.

Yet somehow I am not a believer in a Chinese century just yet. China is an increasingly prosperous country, but it is also one where corruption is rife. When the rest of the world seems to be shunning nationalistic controls and embracing transparency, China – if the external media is to be believed – seems wound up in its protectionist policies. A Chinese century would, in my opinion, require the CCP to get rid of its own reluctance about becoming a global power.

I focus upon transparency in financial markets only because it is a topic that has concerned me most of my working life. I don’t view transparency as some sort of rite of passage for a country to become part of the developed club – but only as a tool for efficient administration.

Most Asian developing countries, like their protectionist controls themselves, trace their roots to anti-colonial movements. A fascination with Communism or socialist ideas can only do so much to help independent banking. If, despite all the reforms, a country takes issue with publicising economic data, it just doesn’t seem productive.

There is no doubt, however, that compared to its neighbours like India, Indonesia, Malaysia or Sri Lanka, China has adapted well to the Western development model. It is precisely because of this adaptability that the CCP’s stance on transparency makes little sense to me. With the improvement in lifestyles, the Chinese are more and more likely to demand the rights which their Western counterparts enjoy. So even though the cultural differences and the near-colonial issues might be one too many, China too would end up embracing the ideas of individualism from which property rights and transparency stem.

Transparency would be very much welcomed by investors. When tick-data history was released for the first time, US market volatility improved significantly. Outside of finance, transparency makes market research less costly, spurs entrepreneurship and lowers barriers to entry as well as legal costs.

But we seem far away from getting there. In today’s China there seems to be no official way, for example, to find out how much capital is being transferred into China. What academics have long relied upon are projections of “under-invoicing” of imports. As the only official channel for the transfer of foreign money into China is FDI, space is left for unofficial routes (dixia qianzhuang) and under-invoicing of exports – a trend so widespread that under-invoicing serves as a fairly significant indicator of capital transfers into China.

Despite all the skepticism around “Communist” China, I remain excited about the choices which China is yet to make. There are, after all, a couple of things that Communist-inspired nationalism has offered China. I look to the Shanghai crisis to illustrate the need for governmental controls in China. The crisis developed when commodity prices collapsed for external reasons and businesses found it difficult to pay back their loans. Without any central control at the time, ownership was transferred quickly to banks, and as there was no central bank to control rates in those days, banks ended up getting more and more involved in real estate. When house prices soared and silver prices tumbled further, the widespread suffering of farmers and small-industry owners left parts of the country in disarray.

One needs to realize that the popularity of state control in China could have its roots in such experiences. An obsession with central control is not a communist fad in China – it is the response to something deeper. There are reasons to believe that Chinese institutions had already favoured a system of credit over equity (indeed the idea of equity might have been primarily a Western idea at the time). Industries in China during the 19th century often arose from family ties. What developed for profit-sharing was a system of fixed dividends – guanli (fixed dividends) and hongli (excess dividends). On one hand this prevented the management of a company from taking excessive risks, but on the other, it made the company more susceptible to default and posed difficulties in raising capital.

Given that, the response to the 2009 crisis was different. The PBOC was quick to loosen monetary policy; it announced a stimulus package to spur domestic demand and fix unemployment. The point is simple – nationalism did have something to offer China. But the modern problems express themselves in different ways.

One such problem is that of domestic debt. As an export-dependent economy, the effect of any external crisis is always felt throughout the economy. In 2009, one may champion the policy which kept MBS-related losses under $20 billion, but the losses due to declines in other assets, e.g. treasuries, were much higher. The collateral facilities were still meagre, and a massive surge in LGFVs as well as shadow banking has done little to help the problem. China’s ability to reduce the risk associated with its piling debt is still limited.

The focus in the media on the cash that China spends abroad has taken attention away from the debt problems the country faces. Even if GDP numbers from China were reliable, and were a sufficient indicator of a country’s growth, China faces severe financial problems related to debt. The worst the media can do is to continue attributing economic data to “cultural” reasons – a temptation that even economists cannot always avoid.

The voice of contrarians doesn’t often go unnoticed. Lars Christensen blogged last year that China may never catch up with the US. Writing for The Diplomat, Larry Diamond predicted that China would turn into a democracy within a generation’s time. There are many social-science predictions extrapolating from the Hong Kong protests that the severity of the administration’s problems will continue to rise. Amidst all this, transparency would seem a low-hanging fruit.

So in the end, I am still very much a believer in China. I find the Deng reforms exemplary – they brought fundamental changes to the Chinese economy. The response of Chinese communities to these reforms was simply inspirational. One can hope that the centralized institutions will one day address the deeper concerns with the economy. Until then, the issue of domestic debt continues to keep China’s economy susceptible to shocks, and the rise of shadow institutions poses a problem similar to the one pawnbrokers presented in the depression years. I remain hopeful because the contrasts between capitalism and communism make less sense than they ever have. In China’s context, the corporate practices of the SOEs only improved after free-market intervention, and the recommendation for a central bank (which became the PBOC) came from American consultants hired after the Shanghai crisis. The singled-out view of China often reported in popular media is only going to appear more fictional as time passes.

Posted in Uncategorized

Banking on delta

In his book “My Life as a Quant”, Emanuel Derman describes the Black-Scholes equation as something of black magic. That you don’t need the expected price of the underlying (the drift) to price the associated derivative seemed unbelievable to practitioners at the time. Black-Scholes is ubiquitous in finance now – trading job interviews (somewhat surprisingly) always require you to have a good understanding of how delta, gamma or vega evolve with time, volatility or strike.
How well the assumptions of constant volatility and Brownian motion simplify the modeling of stock prices continues to surprise many of us who have learned financial modeling. The widespread fame of the Black-Scholes equation has indeed caused disasters, but that probably has more to do with the fame than with the rigour.
Not to mention that the math behind the Black-Scholes equation – which is now used blindly for anything to do with optionality – was already there before Black. Sprenkle had published a closely related valuation formula in Yale Economic Essays in 1961 (“Warrant Prices as Indicators of Expectations and Preferences”). What Black and Scholes provided was an economic insight – that the expectation of prices ( \mu ) didn’t matter for pricing options. This CAPM-flavoured insight ended up creating the market for options.
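Since this post leans on the Greeks throughout, here is a minimal Python sketch (mine, not from any textbook; parameters purely illustrative, no dividends, constant volatility assumed) of the Black-Scholes call price together with delta, gamma and vega.

# A minimal sketch (illustrative parameters, no dividends, constant volatility) of the
# Black-Scholes call price and the Greeks mentioned above.
import numpy as np
from scipy.stats import norm

def bs_call_greeks(S, K, r, sigma, tau):
    """Price, delta, gamma and vega of a European call under Black-Scholes."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    price = S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
    delta = norm.cdf(d1)
    gamma = norm.pdf(d1) / (S * sigma * np.sqrt(tau))
    vega = S * norm.pdf(d1) * np.sqrt(tau)
    return price, delta, gamma, vega

# how the Greeks of an at-the-money call evolve as expiry approaches
for tau in (1.0, 0.5, 0.1, 0.01):
    p, d, g, v = bs_call_greeks(S=100, K=100, r=0.02, sigma=0.2, tau=tau)
    print(f"tau={tau:5.2f}  price={p:7.3f}  delta={d:.3f}  gamma={g:.3f}  vega={v:6.2f}")

Note that the real-world drift \mu appears nowhere in the sketch – which is precisely the insight credited to Black and Scholes above.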
Self-Financing Portfolio
A key idea now taught in schools and finance programs is that of a self-financing tracking portfolio. The way the Black-Scholes PDE is derived in most of the literature is through a portfolio of a bond and a stock – a tracking portfolio, i.e. a combination that replicates the payoff of an option. The value of this portfolio, V = αB + ΔS (where α is the weight of the bond and Δ the number of stocks), cannot be changed except by changing the weights α, Δ. By following dV at every increment dS, we obtain the differential equation that we know as the Black-Scholes PDE:
dV = \alpha\, dB + \Delta\, dS = \alpha r B\, dt + \Delta (\mu S\, dt + \sigma S\, dW)

(1) \quad dV = (\alpha r B + \mu \Delta S)\, dt + \sigma \Delta S\, dW

By Ito’s lemma:

dV = \frac{\partial V}{\partial t} dt + \frac{\partial V}{\partial S} dS + \frac{1}{2} \frac{\partial^2 V}{\partial S^2} (dS)^2 = \frac{\partial V}{\partial t} dt + \frac{\partial V}{\partial S} (\mu S\, dt + \sigma S\, dW) + \frac{1}{2} \frac{\partial^2 V}{\partial S^2} \sigma^2 S^2\, dt

(2) \quad dV = \left( \frac{\partial V}{\partial t} + \mu S \frac{\partial V}{\partial S} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} \right) dt + \sigma S \frac{\partial V}{\partial S}\, dW

Equating (1) and (2): matching the dW terms gives \Delta = \partial V / \partial S, and the dt terms then give \alpha = \frac{1}{rB} \left( \frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} \right). Since rV = r\alpha B + r\Delta S, we have the world-famous Black-Scholes PDE:

(3) \quad rV = \frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS \frac{\partial V}{\partial S}
Notice that the idea here is purely a no-arbitrage argument – with the delta of the option always in the spotlight.
The Hedge-Portfolio
Another way to think about the Black-Scholes equation is through the risk-free growth of the hedge portfolio, which the no-arbitrage arguments guarantee. Consider the hedge portfolio for a European call option, Π = V − ΔS. The differential change in the PNL of the hedge portfolio is dΠ = dV − Δ dS.

Using (2), we have:

d\Pi = dV - \Delta\, dS = \left( \frac{\partial V}{\partial t} + \mu S \left( \frac{\partial V}{\partial S} - \Delta \right) + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} \right) dt + \sigma S \left( \frac{\partial V}{\partial S} - \Delta \right) dW.

If we demand local risklessness at every incremental change, we must have Δ = ∂V/∂S and hence dΠ = rΠ dt, i.e.

\left( V - S \frac{\partial V}{\partial S} \right) r = \frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}

(the Black-Scholes PDE again). In other words, no-arbitrage arguments imply that the PNL of a locally delta-hedged portfolio must grow at the risk-free rate: d\Pi_t = \Pi_t r\, dt. It can also be numerically verified that a delta-neutral strategy (which resets Δ = ∂V/∂S at every step dt) results in the hedge portfolio growing as \Pi_t = \Pi_0 e^{rt} (notice that the distribution of \Pi_T is not normal).
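As a sanity check of the local statement d\Pi_t = \Pi_t r\, dt, here is a small Monte Carlo sketch of my own (illustrative parameters): freeze Δ over one small step, simulate many increments of S under a non-risk-free drift, and compare the average P&L of the hedged position with rΠ dt.

# A minimal numerical check (illustrative parameters) that the locally delta-hedged
# portfolio Pi = V - Delta*S earns the risk-free rate on average over a small step,
# regardless of the real-world drift mu.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, tau):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2), norm.cdf(d1)

np.random.seed(0)
S0, K, r, sigma, tau, mu = 100.0, 100.0, 0.02, 0.2, 0.5, 0.08
dt, n = 1e-4, 200_000

V0, delta0 = bs_call(S0, K, r, sigma, tau)
Pi0 = V0 - delta0 * S0                         # long the option, short delta stocks

Z = np.random.standard_normal(n)
S1 = S0 * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z)
V1, _ = bs_call(S1, K, r, sigma, tau - dt)

dPi = (V1 - V0) - delta0 * (S1 - S0)           # P&L of the frozen hedge over one step
print("average dPi :", dPi.mean())             # should approach r * Pi0 * dt ...
print("r * Pi * dt :", r * Pi0 * dt)           # ... as dt -> 0, despite mu = 8%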
A point on hedge-portfolio volatility
We often assume that the volatility of the differential change dΠ is what we need to minimize in order to “hedge” the portfolio. In reality, however, you might be more concerned with the variance of the PNL of the hedge portfolio itself. Simulation (or tracking real examples) shows that delta-neutral strategies can leave that PNL with quite high volatility.
Over the summer, working with a smart group of people at fintegral, I experimented with ways to minimize the volatility of Π in the B-S economy (instead of minimizing only the volatility of dΠ – which is trivial with the delta-neutral approach, i.e. keeping the coefficient of dW at zero). Even when it works, a full-fledged delta-neutral strategy is, in my opinion, overkill for what we wish to use it for. Here is the math to explain what I mean.
For the volatility of Π, let’s look at the direct formulas. With the delta-neutral strategy, the hedge-portfolio value for a call looks like:

\Pi_t = P_t - N(d_1) S_t = -K e^{-r(T-t)} N(d_2).

The volatility of \Pi_t therefore ought to be the same as that of N(d_2) (this can be numerically verified as well). To reduce the volatility of \Pi_t, let’s assume that we deviate from delta-neutral hedging by maintaining \Delta_t + \epsilon_t = N(d_1) + \epsilon_t stocks instead of \Delta_t = \partial P / \partial S stocks. The hedge-portfolio value then moves as \Pi'_t = P_t - N(d_1) S_t - \epsilon_t S_t = \Pi_t - \epsilon_t S_t = -K e^{-r(T-t)} N(d_2) - \epsilon_t S_t.
We can therefore test whether a small deviation from the delta-neutral strategy achieves a lower variance of Π. The experiments I completed suggest that such a deviation does pay off. Sticking too closely to the delta-neutral strategy becomes particularly costly once you take liquidity and transaction costs into account.
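Below is a rough sketch of the kind of experiment described above (it is not the code used at fintegral, and the parameters are illustrative): simulate the spot to a horizon, form the hedge-portfolio value with N(d_1) + ε stocks for a few fixed deviations ε, and compare the dispersion of Π across paths.

# A rough sketch of the experiment described above (illustrative parameters, not the
# original code): compare the dispersion of the hedge-portfolio value across simulated
# paths for a few fixed deviations eps from the delta-neutral holding.
import numpy as np
from scipy.stats import norm

def call_and_delta(S, K, r, sigma, tau):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2), norm.cdf(d1)

np.random.seed(1)
S0, K, r, sigma, mu, T = 100.0, 100.0, 0.02, 0.2, 0.08, 1.0
t_h = 0.5                                  # horizon at which the hedge portfolio is examined
n_paths = 100_000

# spot at the horizon under a real-world drift mu
S_h = S0 * np.exp((mu - 0.5 * sigma**2) * t_h
                  + sigma * np.sqrt(t_h) * np.random.standard_normal(n_paths))
C_h, delta_h = call_and_delta(S_h, K, r, sigma, T - t_h)

for eps in (0.0, -0.05, +0.05):
    Pi_h = C_h - (delta_h + eps) * S_h     # hedge-portfolio value with deviation eps
    print(f"eps={eps:+.2f}  std(Pi)={Pi_h.std():8.3f}  mean(Pi)={Pi_h.mean():8.3f}")

A fuller version would also rebalance through time and charge a proportional cost on each trade, which is where a strictly delta-neutral strategy starts to look expensive.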
There are many factors that determine how much you should deviate, but the benefits of deviating are strong. As with everything else in financial modeling, it then boils down to the idea of position risk – a ubiquitously vague idea that is still tied to expectations about the future.

Alpha, Beta(s)

Alpha vs Beta

An interesting visual presented by Dr Ralph Koijen (http://www.koijen.net/) in his talks is a timeline of alpha. Before the 1970s, alpha takes up a wide slice of the band of returns – every return, in those days, was seen as alpha. Once CAPM was proposed, the idea of market beta caught on and alpha shrank, giving some of its spotlight to the market risk premium. Much later, when SMB and HML were proposed by Fama-French, the flood gates were opened to smart betas. Betas of all kinds popped up, and alpha’s influence and visibility were significantly compromised. Indeed, what used to be alpha until recently is now SMB, HML and a whole set of other factors.

Are we chasing a moving target then? Well, as a quant developer, I’d say that sounds familiar. But there is more to it than that. I think the increase in commonality across funds and the flurry of technology-assisted index funds have popularized betas. These funds keep transaction costs low for investors and make generating alpha a bit more difficult for active managers. The second, and probably the major, reason for the lack of clarity in alpha’s definition is its reliance on a particular benchmark.

If you choose a specific subset of market performers as your benchmark, your alpha will differ from the one you get against the whole market. Without specifying the regression, it’s difficult to talk about its “intercept”, i.e. the alpha.

In the heyday of CAPM, people hailed the market risk premium (a bit too much, in hindsight), claiming that taking risks is the way to get higher returns. This was never a causal argument, in my opinion – risk itself doesn’t generate any return (semantically speaking). A group of companies doing something silly, after all, would not necessarily generate higher returns. What CAPM meant was that higher return volatility would be associated with higher expected return – and that you cannot beat the market if you don’t invest in the outliers of the “regression”. The confusion over the regression has therefore caused the confusion over alpha.
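To make the dependence on the benchmark concrete, here is a toy regression sketch (synthetic data, purely illustrative): the same fund regressed against two different benchmarks produces two different intercepts, i.e. two different alphas.

# A toy illustration of the point above: alpha is just the intercept of a regression,
# so it moves with the benchmark you regress against. The return series are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500                                              # number of return observations

mkt = rng.normal(0.005, 0.04, n)                     # broad "market" benchmark
style = 0.6 * mkt + rng.normal(0.002, 0.03, n)       # narrower style benchmark
fund = 0.002 + 0.9 * mkt + rng.normal(0.0, 0.02, n)  # a fund with market beta 0.9

def alpha_beta(y, x):
    X = np.column_stack([np.ones_like(x), x])        # [intercept, benchmark]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[0], coef[1]

for name, bench in [("whole market", mkt), ("style index", style)]:
    a, b = alpha_beta(fund, bench)
    print(f"benchmark = {name:12s}  alpha = {a:+.4f} per period  beta = {b:.2f}")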

Alpha and the Fundamentals

Generally speaking, what counts as alpha in the industry now is the fund manager’s ability to find “quality” in stocks. When you want to generate alpha, population statistics, i.e. betas (or “factor loadings”), cannot completely judge a company. Fundamental investors believe that fundamentals offer a better explanation of returns than historical time series. In that sense, generating alpha is all about beating the benchmark. The active manager’s enemy could then be the “generality” of a factor: if a factor of a company is common across similar companies, the payoff from a manager’s selection is going to be limited, as this commonality would be picked up as a trend and become part of some smart beta.

What fundamental investors often look at are FCFF or P/E valuation models (more often than not these are just guidelines). It is still worthwhile, I believe, to see how FCFF fluctuations relate to fluctuations in market prices. Even if the P/E or FCFF models are not accurate, one would expect market prices to adjust to fundamentals – because investors are driven by their belief in their models (whether the models are true probably doesn’t matter).
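Here is a hedged sketch of the kind of check I have in mind; the file name and column names are assumptions rather than a real dataset, and any source with price and FCFF-per-share history in a similar layout would do.

# A sketch of lining up changes in FCFF per share with changes in the market price.
# The file name and columns ('price', 'fcff_per_share') are hypothetical.
import pandas as pd

df = pd.read_csv("goog_fundamentals.csv", parse_dates=["date"], index_col="date")

quarterly = df[["price", "fcff_per_share"]].resample("Q").last()
changes = quarterly.pct_change().dropna()

print(changes.corr())                        # contemporaneous co-movement of the two series
rolling = changes["price"].rolling(8).corr(changes["fcff_per_share"])
print(rolling.dropna().tail())               # two-year rolling view of the relationship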

If we view the time series of GOOG since 2007 against FCFF per share:

[Figure: GOOG price vs FCFF per share, 2007 onwards]

There does seem to be a clear correspondence. The post-crisis trend in FCFF per share has matched the overall rise in the price. The overpricing of 2013 is quite visible as well. A more thorough analysis is needed for the less glamorous stocks, but that is probably better saved for the next post.


GBP/USD and Gilts

Five years since the 2009 crisis, the debate on policy controls is still on. Speculation about whether rates will be raised or cut has not stopped, thanks to the uncertainties the crisis accentuated. As a quant developer working away on my computer, I had felt only a distant fear of deflation. Now that the stock market performed well last year, the media has talked about a restoration of “faith” in markets, while the skeptics continue to predict another meltdown.

Market perception is an idea so vague and so ubiquitous that one could attribute everything to it. But as a quant developer (more developer than quant, really), I submit myself to the obsession of seeking observable parameters that influence certain variables. The immediate need for me was to transfer some of my money (not a lot) to the UK from the US, where I used to work. Being a student has raised my vigilance towards transfer fees to a level that my wife’s generosity hasn’t been able to curtail. So I set out to understand the dynamics of GBP/USD and see whether it was worth waiting a month for it to come down before transferring my money.

The Demand and Supply Arguments

To someone who didn’t major in economics, the whole subject seems to be mostly demand and supply. The first clue for me, therefore, is to see what affects the demand for GBP relative to USD. The return on a currency’s bonds should provide a lower bound on the return in its local market, so I suspect bond prices are a reasonable indicator of the GBP/USD rate. An increased demand for GBP (whilst USD demand stays constant) should drive GBP/USD up, i.e. if gilt prices fall (GBP yields increase), demand for GBP should increase (it offers more yield) and so should GBP/USD (again, while USD demand is constant). Of course, we don’t live in a free-market world, so the policy response is also a factor. But do we really need to simulate political situations just so I can transfer my money? Seems unlikely.

A first-cut analysis here looks at just GBPUSD, the 5-yr gilt and interest rates. I find that the UK and US economies are largely correlated and expect USD and GBP yields to rise together. At the time of writing, we have been expecting a rise in UK interest rates. Tapering is on and we think Mark Carney might increase the Bank rate any moment (although the official communication points to next spring); the UK economic data already seems to be creating that pressure. Before making any bets on GBPUSD, I feel it is worthwhile to explore how rate changes have historically affected bond yields and GBP/USD. I download data from the Bank of England website, attached here: rates, and run a basic VECM (eviews results: gbpusd).
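For anyone without eviews, roughly the same exercise can be run with statsmodels. The sketch below assumes a CSV with columns GLT (5-yr gilt yield), RATE (Bank rate) and GBPUSD; the lag order and cointegration rank would need to be set to match the original specification.

# A sketch of the VECM exercise in statsmodels rather than eviews; file and column
# names are assumptions.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

data = pd.read_csv("boe_rates.csv", parse_dates=["date"], index_col="date")
endog = data[["GLT", "RATE", "GBPUSD"]].dropna()

# Johansen-style test to choose the cointegration rank at the 5% level
rank = select_coint_rank(endog, det_order=0, k_ar_diff=2, signif=0.05)
print(rank.summary())

res = VECM(endog, k_ar_diff=2, coint_rank=rank.rank, deterministic="co").fit()
print(res.summary())

print("loadings (alpha):\n", res.alpha)       # speed of adjustment to the long-run relation
print("cointegrating vector (beta):\n", res.beta)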


About Cointegration

A few observations on the relationship between gilts and interest rates:

1. In the short term, rates are more responsive to gilts than the other way around.
2. The effect of rates on gilts seems significant at lag 2 (in the short term) and negative. In other words, when rates are increased (decreased), gilt yields decrease (increase) in the short run (more demand leads to higher prices – which confirms the intuition).
3. The cointegration effect is more significant for rates than for gilts – expected behaviour as well, since the policy response is a corrective one.
4. Policy response has been less frequent since 1987. Policy makers seem reluctant to increase or decrease rates (possibly because it matters less when bond yields are already so low).

5. Cointegration is significant for both GBPUSD and rates. Here is a plot of the 12-period moving average of the error-correction term alongside the change in GBPUSD. The cointegration is evident from the eviews result, as is the underpricing of GBP/USD around April 2009. One could argue that GBP/USD is finally priced well right now (according to its relationship with gilts alone – not considering USD data). If gilts don’t change, I would argue that the cointegration should keep the GBP/USD value stable.

[Figure: 12-period moving average of the error-correction term and the change in GBPUSD]
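Continuing the statsmodels sketch above (so `endog` and the fitted `res` from it are assumed to exist), the error-correction term and its 12-period moving average can be reconstructed roughly as follows; this approximates the plot described in point 5, not the original eviews output.

# Rebuild the error-correction term implied by the fitted VECM and plot its
# 12-period moving average next to the change in GBPUSD.
import pandas as pd
import matplotlib.pyplot as plt

z = pd.Series(endog.values @ res.beta[:, 0], index=endog.index)   # error-correction term
z_ma = z.rolling(12).mean()                                       # 12-period moving average

fig, ax = plt.subplots()
z_ma.plot(ax=ax, label="12-period MA of error-correction term")
endog["GBPUSD"].diff().plot(ax=ax, secondary_y=True, label="change in GBPUSD")
ax.legend(loc="upper left")
plt.show()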

Another way to describe this cointegration is as long periods of stable rates followed by a game of catch-up with gilts. Looking at the 1987 crash, for example: interest rates had been stable until Feb 1987, when bond yields came down by more than a whole percentage point in two months. Rates were cut immediately, from 10.875 to 9.875, and every month until May 1987, after which yields picked up and rates were increased in August. The next year, policy makers seem to have seen a decline not reflected in bond yields and cut rates in Apr 1988 – only to raise them by 4% by the end of the year. Rates remained high until Sep 1990, when stability seemed to have arrived. By 1992, rates had been cut by 2% while bond yields kept declining. When, in the 90s, bond prices went up again, rates were increased in 1995. In 1997, bond yields rose and interest rates were increased. It was only in Sep 1998 that rates were cut again, and they continued to be lowered until Jun 1999. Bond yields stabilized in 2000 and the rate was held constant. Since then, rates have been cut every time bond yields plunge – and they have been plunging a lot. In 2007, rates were increased shortly before the financial crisis occurred and bond yields became extremely low (demand shifted to fixed income). They momentarily rose but have not regained a stable pre-2009 level yet. Long story short, the cointegration effect between gilts and rates is significant: we can be fairly sure that high bond yields will eventually provoke a policy response, while long stretches without a response imply that gilts alone don’t sufficiently reflect all the relevant factors.

The Phillips curve

Other than the USD data, there are two things one must consider in this analysis: first, the slope of the yield curve, and second, the pair of inflation and the employment rate. I download the enhanced file with employment data (from the ONS) here. Running the VECM again yields the following results:
1. The cointegration effect is strongest on rates (and on employment), i.e. rates and employment do move together – even in the short term, rates seem to have a direct impact on employment. The second factor with an effect on employment is RPI – more inflation means more unemployment (RPI has a negative impact on employment).
2. Quite unsurprisingly, rates are affected most by the cointegration factor (as are gilts).

3. Surprisingly, rates don’t respond to employment in the short run. This could obviously be a post-1998 artefact: I would expect employment declines to result in an immediate policy response, but since our data is post-1998, the only policy response in scope has been a constant interest rate, i.e. employment seems to have recovered while interest rates have remained low. The VECM output with lag 2 points out that GLT responds to the rate rather quickly (first lag), but the rate response to RPI is two lags late. Employment changes don’t seem to have a short-term influence on most of the variables.

4. Employment seems to be affected a lot by gilts (the bond market affects employment). It would seem that employment affects rates through gilts rather than directly.


On the other side of the ocean

If we add the GBP/USD data and look at the VAR output, it seems that neither RPI nor employment-rate changes have a short-term impact on any of the other factors. The conclusion one can arrive at is quite simple – RPI is observed over a much longer horizon, while the financial indicators have more statistically significant relationships among themselves. In other words, we’re better off using gilt and interest-rate data to estimate the value of GBP/USD than RPI. It seems that gilts are more cointegrated with GBP/USD than the macroeconomic variables are.

If we look at the overall VECM output with GBP/USD, we see that the cointegration isn’t really that significant for GBP/USD. In the short term, none of RPI, EMP, RATE or GLT affects GBP/USD significantly.

Exploring GBP/USD with respect to gilts and interest rates alone, we now bring in data on US Treasuries. Looking at the cointegration (output from the VECM), we note that:

1. The cointegration is not so strong for GBPUSD with respect to UK and US yields. UK yields themselves show a significant cointegration effect (i.e. UK yields fall back or rise up as GBPUSD moves).
2. The GBP/USD response to UK and US yields is still strong in the short term. This short-term effect is more significant than the cointegration itself. As one would expect, the coefficient on UK yields is positive (when UK yields increase, GBPUSD becomes more expensive) and the coefficient on US yields is negative (when US bonds offer higher yields, GBPUSD becomes cheaper).
3. The response of UK yields to US yields is quite strong and significant. The effect of UK yields on US yields is relatively weaker and less significant (size does matter after all).

The bottom line is that UK yields seem to be the strongest factor affecting GBP/USD in the short term. As bond yields rise on both sides of the ocean, I don’t think GBP/USD will be significantly lowered unless there is a significant change in the bond market of either country (a change that a rate increase is certain to cause). But we can be fairly sure that the bond market gives us enough information to predict GBPUSD closely enough.

Update May 14, 2014: Bond yields continue to fall. Historically, falling bond yields have meant a rate cut (and hence a lower GBPUSD), but a rate cut is not a possibility at the moment since rates are already effectively at zero. Falling gilt yields themselves have a short-term increasing effect on GBPUSD – which recently touched a five-year high (1.692). If gilt yields continue to fall, I expect to see a further increase in GBPUSD. Policy intervention was feared once again due to rising house prices – if rates are increased, the immediate effect would be a rise in GBPUSD, since 1) it is already rising (the lag effects are significant) and 2) gilt yields would decline. Including cointegration, the equation turns out to be the following:
Z_t = GLT_{t-1} - 0.76\, RATE_{t-1} + 5.7\, GBPUSD_{t-1} - 11 = -1.2

\Delta(GBPUSD_t) = -0.006\, \Delta(Z_t) - 0.01\, \Delta(GLT)_{t-2} + 0.01\, \Delta(RATE)_{t-2} + 0.01\, \Delta(GBPUSD)_{t-1}

The expected cointegration factor under these conditions is about -0.007. With rates unchanged, if gilt yields decline further, GBPUSD would increase. My prediction that GBPUSD should come back down relies on the cointegration term being large in magnitude at the moment (-1.2).
