This is going to be a long, and somewhat complicated post. I’m going to be looking, very very briefly, at some of the more difficult topics in Economics, tying them to “Climate Science”, and then taking a look at the current state of Bond and Stock markets, then asking a pointed question about how markets crash. I’ll try to keep it as brief and attainable as possible. (That is, this is to be a ‘popular piece’ to the extent I’m able.)
Economists & the History of Economic Theory
I can’t imagine a heading more likely to cause folks to run away screaming. Economics is named “The Dismal Science” for more than one reason. Then “History” is often worse. Now blend in “Theory” and looking at the “History of Economic Theory” has got to sound like having your teeth pulled while being lobotomized without anesthesia. But please, give it a chance. I hope to make it clear and at least a little interesting.
The basic point is just that Economics is a Social Science in that it studies human behaviour. People are unpredictable, and sometimes irrational, so predicting them is extremely hard. When will the next dictator arise? How long until Venezuela collapses? Will the USA continue to embrace the Progressive Socialist path into a Grand Venezuela, or will Trump “turn it around”? It simply is not possible to know or predict. It is Uncertain.
This is different from a statistical distribution. We can’t be “20% New Dictator” in some country. Yet for some of it, you can have a distribution. You can get “20% Progressive Agenda”. How do you sort out the uncertainty from the probabilities, and how do you predict / project / model them? That kind of stuff has been at the core of Economic Theory for about 100 years (or perhaps even more, depending on how you look at some early Economic Theory … the problem was certainly there from the start.)
So Economics and Economists have a special perspective on uncertainty, having it thrust in their face for all of their academic training and subsequent study. When we see what is done in “Climate Science”, it just shouts to us a bit “Oh No!!! We tried that and it is WRONG! You are missing the Uncertainty Problem!!!”
From lousy error bands, to linear regression of cyclical things, to linear regression of chaotic things, to linear regression of unknown things, to assumptions of stability in things unknown or, worse, known to be unstable; and more. It all just screams at us “We’ve been there, run away NOW! For here there be Uncertainty Dragons!!!”
Over at Judith Curry’s place, I found an excellent article on just this topic (while searching for more of Javier’s stuff… It’s generally a great place to go fishing ;-)
I don’t know as I’d call it “non-orthodox” economics, being as Keynes is involved, but hey, “New Math” is from a few hundred years ago too…
So I’m going to quote chunks of it, and add some commentary, but folks really ought to read the whole thing. It has much clue in it, just waiting for folks to pick up the gems.
The Uncertainty Monster: Lessons From Non-Orthodox Economics
Posted on July 5, 2017 | 129 Comments
by Vincent Randall
A perspective on economists’ grappling with the ‘uncertainty monster.’
In this essay I am going to try to introduce non-economists who work in fields where they are first coming into contact with the ‘uncertainty monster’ – as Judith Curry calls it – to what some economists have learned from their encounter with it. First I will try to explain why economists encountered the monster before others working in different disciplines. Then I will try to give the reader an overview of what different economists have said about it. Then finally I will briefly consider the differences and similarities between how economists are confronted with the uncertainty monster and how those working in ‘harder’ sciences, like climate science, are confronted with the uncertainty monster. There are definite differences and definite similarities.
I like that name, “The Uncertainty Monster”. It certainly is a monster. This is what Rumsfeld called “the unknown unknowns”, mostly, but to some extent also the “known unknowns”. Things like “What causes the Thermohaline Circulation to halt?”. Some folks claim it is a flood of fresh water into the northern latitudes. Perhaps for some of the halts, but not all. We know that the heat backs up in Florida and Europe freezes, and that it has happened many many times. We don’t really know why. It is a hypothesis that glacial melt causes it. At other times there is no such event. In large part it looks like a chaotic and unpredictable switch of the Gulf Stream. Yet there are periodicities in it that may point to a tidal trigger and lunar 1800 year position changes. It is highly “Uncertain”. When, why, how. All major uncertainties. Is it happening now? The THC has slowed some; will that continue? It isn’t a probability distribution, it either will, or it won’t, and we have no basis for saying which or how much. All we have is speculation in the face of uncertainty.
In economics, and the history of it, you learn mostly what grave errors were often made by other economists in creating their theories. You learn to be very very skeptical that any of it is right, and more skeptical still that what is right is precise or persistent. (I.e., “things might change”… because people change…)
So yes, The Uncertainty Monster has plagued Economics and Economists for decades, maybe even centuries. We’re uniquely steeped in it. So when we see it blithely ignored by “Climate Scientists” and swept under the rug, well, let’s just say we get that creepy spider skin crawling feeling… and we want to shout “Beware Of Arrogance!”.
Keynes concludes that this means that a lot of economic activity is determined not by calculation of probabilities or anything like it. Rather it is determined by the state of confidence.
It would be foolish, in forming our expectations, to attach great weight to matters which are very uncertain. It is reasonable, therefore, to be guided to a considerable degree by the facts about which we feel somewhat confident, even though they may be less decisively relevant to the issue than other facts about which our knowledge is vague and scanty. For this reason the facts of the existing situation enter, in a sense disproportionately, into the formation of our long-term expectations; our usual practice being to take the existing situation and to project it into the future, modified only to the extent that we have more or less definite reasons for expecting a change. The state of long-term expectation, upon which our decisions are based, does not solely depend, therefore, on the most probable forecast we can make. It also depends on the confidence with which we make this forecast — on how highly we rate the likelihood of our best forecast turning out quite wrong. If we expect large changes but are very uncertain as to what precise form these changes will take, then our confidence will be weak.
You could look at “state of confidence” as “how wide are the error bands?”, but it is a bit more than that. Error bands show up when you have decent statistics. Confidence is when you are doing that, but also ‘guessing’ about the future. I make a guess that I’ll be alive in 10 years when I build a factory or decide to buy a new home in a new State, but there are no error bands known to me. The entire insurance industry is built around finding those probabilities on a herd basis, but that can tell me nothing about the only thing that matters to my decision; my actual future. It is a guess about an error band…
Yet that drives most economic activity in the long term. Over a year or two out, we are all guessing. Will North Korea nuke Hawaii? Will ISIL get their Caliphate established? Guesses all. Yet we must decide to build a factory, vacation in Hilo, or enter long term oil contracts with Iraq.
A dummies guide to uncertainty in economics
First up is Keynes himself. We have already seen how Keynes introduced the concept into economic theory. But he also did some work on the implications uncertainty had for econometric modelling – that is, the use of mathematical and statistical models to try to predict future economic outcomes. Keynes addressed this in his paper ‘Professor Tinbergen’s Method’, written in 1939. The ‘Tinbergen’ in question was Jan Tinbergen, a Dutch economist who pioneered multiple linear regression modelling. Keynes had actually written an entire book on probability and statistics where he advanced a theory of probability that integrated uncertainty. This is too complex to look at now but interested people should get their hands on a copy of ‘Treatise on Probability’.
Keynes lays out some of the issues with statistical modelling in his Tinbergen paper. For example, he makes clear that…
Put broadly, the most important condition is that the environment in all relevant respects, other than the fluctuations in those factors of which we take particular account, should be uniform and homogeneous over a period of time.
Now most people will be taught in statistics class that the coefficients in a multiple linear regression can only be taken at face value if we assume that the statistical model is complete. That is, that all relevant variables have been included in the model. But as most people know, in practice most people do not follow this rule. But they should and the fact that they do not probably means that we should take what they say with more than a pinch of salt.
So first off notice that this was a ‘hot topic’ in Economics back in 1939. We’ve been at this a while…
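That completeness condition is easy to see in a small simulation. Here’s a minimal sketch (Python, with made-up variables, not anything from Tinbergen’s actual data): leave a relevant, correlated variable out of the regression and the coefficient on the included one silently absorbs its effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

# Two correlated drivers; y depends on BOTH of them
x1 = rng.normal(0, 1, n)
x2 = 0.8 * x1 + rng.normal(0, 0.6, n)   # x2 is correlated with x1
y = 1.0 * x1 + 2.0 * x2 + rng.normal(0, 0.5, n)

# Full model: both coefficients are recovered correctly
X_full = np.column_stack([x1, x2, np.ones(n)])
b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
print(f"full model coefficient on x1:     {b_full[0]:.2f}")   # ~1.0

# Omit x2: the x1 coefficient absorbs x2's influence (bias ~ 2.0 * 0.8)
X_short = np.column_stack([x1, np.ones(n)])
b_short, *_ = np.linalg.lstsq(X_short, y, rcond=None)
print(f"omitted-variable coefficient:     {b_short[0]:.2f}")  # ~2.6
```

Both regressions “fit” perfectly happily; only the complete one means what its coefficients say.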
Next, that key line:
the most important condition is that the environment in all relevant respects, other than the fluctuations in those factors of which we take particular account, should be uniform and homogeneous over a period of time.
Yet we know the climate environment is anything but uniform and homogeneous over time. It is chaotic (in the mathematical sense), changes in step functions for no known reasons, has modes of oscillation of scales from months to years to decades to centuries, and has a long history of dramatic and unexplained changes. From the Younger Dryas, to a few dozens (hundreds?) of stadials, interstadials, glaciations, interglacials, D.O. events, Bond Events, Heinrich Events and more.
Stadials and interstadials are phases dividing the Quaternary period, i.e., the last 2.6 million years. Stadials are colder periods and interstadials are warmer. Each phase has a Marine Isotope Stage (MIS) number, working backwards from the present, with stadials having even numbers and interstadials odd numbers. Thus the current Holocene is MIS1 and the Last Glacial Period is MIS2. Stages are divided into warmer and colder intervals. MIS 5e (the Eemian), the hottest of the last million years, was the oldest interstadial of MIS5, with MIS3 and MIS1 being interstadials and MIS2 and MIS4 being colder stadials. In glacials, a, c and e are stadials and b and d are warmer interstadials. Thus MIS 6a, 6c and 6e are stadials and 6b and 6d are interstadials.
Generally, stadials endure for a thousand years or less, interstadials for less than ten thousand years, interglacials for more than ten thousand and glacials for about one hundred thousand. The Bølling Oscillation and the Allerød Oscillation, where they are not clearly distinguished in the stratigraphy, are taken together to form the Bølling/Allerød interstadial, and dated from about 14,700 to 12,700 years before the present.
Greenland ice cores show 24 interstadials during the one hundred thousand years of the Wisconsin glaciation. Referred to as the Dansgaard-Oeschger events, they have been extensively studied, and in their northern European contexts are sometimes named after towns, such as the Brorup, the Odderade, the Oerel, the Glinde, the Hengelo, the Denekamp, etc.
It just shouts at us to ask “How can that be called uniform and homogeneous over a period of time? Eh?”
At that moment, the entire edifice of “Climate Science” looks like so much hokum and bunk to anyone who has had to spend a few years wrestling with The Uncertainty Monster and wrongly done linear regressions.
IMHO, that is the sort of thing that slaps many Economists across the face and is the reason why you find many of them participating on the Skeptical side of the Climate Wars.
Another problem that Keynes highlights in the paper is as follows:
For, owing to the wide margin of error, only those factors which have in fact shown wide fluctuations come into the picture in a reliable way. If a factor, the fluctuations of which are potentially important, has in fact varied very little, there may be no clue to what its influence would be if it were to change more sharply. There is a passage in which Prof. Tinbergen points out (p. 65), after arriving at a very small regression coefficient for the rate of interest as an influence on investment, that this may be explained by the fact that during the period in question the rate of interest varied very little.
Keynes’ criticism is as fresh today as it was in 1939. Because we have no access to repeatable controlled experiments the model is limited by the actual variability in the historical data. The relationship between one variable and another variable may not be linear. The coefficient may rise massively past a certain point. The example of the interest rate is a good one. If the interest rate only moves within the bounds of one or two percentage points in a sample its impact on investment will probably be minimal or non-existent. A regression would tell us this. But if the interest rate was then raised in an unprecedented way – say, by 15% — then the impact on investment could be enormous. This actually happened in 1979-1980 when the interest rate was raised from around 10% to just over 17%. Investment crashed and the economy went into recession.
CO2 anyone? Though with CO2 we do have something of a prior natural experiment. It has been far far higher in the geological past and there was no thermal runaway. Often it could get fairly cold with high CO2 levels. For most of the history of the planet, CO2 levels were far higher, yet here we are. That, alone, ought to be enough to falsify the CO2 hypothesis.
But the more important point is just this: We do not know all the variables and how they interact. We can not say that something that was “always a constant” didn’t, in fact, have a mode of change we didn’t know about.
Recently, we got proof of that as a real issue. The Sun went very very quiet. Total Solar Irradiance (TSI) didn’t change much at all, but the spectrum shifted. Far far less Extreme UV and UV, a lot more red and infra-red. Then the unexpected happened: The atmosphere got shorter. Same mass, just not puffed up as much. This causes all sorts of collateral changes. We moved from a Zonal Flow to a Meridional Flow jet stream. Cloud number, size, and distribution shifted. Storm tracks moved. We’ve got much increased flooding all over the planet. Precisely timed to the solar shift (yet Climate Junkies claim it must be CO2, despite no coincident change of CO2 and no such issues in the prior 30 years of CO2 Panic Mongering). So this is an existence proof of a variable we didn’t know about, with major impacts and nonlinear interactions. Just the kind of thing Keynes was talking about.
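Keynes’ restricted-range point deserves a demonstration of its own. Here’s a minimal sketch (Python, with a made-up nonlinear response standing in for “investment vs. interest rate”; none of the numbers are real data): a regression fit over a quiet period yields a tiny, harmless-looking coefficient, and extrapolating that line to an unprecedented excursion fails badly.

```python
import numpy as np

# Hypothetical response curve: nearly flat at low x, strongly
# nonlinear past a threshold at x = 5.
def response(x):
    return np.where(x < 5, 0.1 * x, 0.1 * x + 2.0 * (x - 5) ** 2)

rng = np.random.default_rng(0)

# "Historical" sample: the driver barely moved (x between 1 and 2)
x_hist = rng.uniform(1, 2, 200)
y_hist = response(x_hist) + rng.normal(0, 0.05, 200)

# Fit a straight line to the quiet period
slope, intercept = np.polyfit(x_hist, y_hist, 1)
print(f"fitted slope on the quiet sample: {slope:.3f}")  # small, looks harmless

# Extrapolate to an unprecedented excursion (x = 15)
linear_guess = slope * 15.0 + intercept
print(f"linear prediction at x = 15: {linear_guess:.1f}")
print(f"actual response at x = 15:   {float(response(15.0)):.1f}")  # vastly larger
```

The regression was perfectly correct about the sample it saw; it just had no way of knowing about the nonlinearity it never sampled.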
Currently our temperature data, globally, is far far too sparse, variable in quality, full of dropouts (holes in the data) and too manipulated in a variety of ways to be usable for much. A huge effort has gone into creating statistical manipulations to try to overcome the demands of Nyquist and fabricate meaning out of nothingness. Yet that is based on a completely broken set of assumptions and even has the physics wrong. You can NOT average temperatures in different places or with different air masses and derive any meaning as a temperature or heat content from the result. Temperature is an intensive property.
That means that even our notion of what the present “Global Average Temperature” is, or what it ever was in the past, is an unknown and unknowable thing. It isn’t a statistical probability, it can not be made more accurate by averaging. You can reduce error by averaging measurements of the same thing but only for random errors. You can not reduce error of systematic errors by averaging nor can you reduce errors in the actual temperature by averaging temperatures from different places or times (as the air mass changes physical state – things like dew point).
Yet exactly that is used as the foundation stone for ALL of the panic over “Global Warming”. A simple statistical fraud, at best, horridly failed understanding of physics at worst. (Yes, I consider it a worse failing to not know enough physics and statistics to get them right; since anyone can be paid enough to commit fraud and that is only a failure of ethics, not ability.)
But now look at it in the context of The Uncertainty Monster: It is all about attempting to erase uncertainty by false methods. You can NOT erase the uncertainty in cloud formation and extent via averaging thermometers. You can NOT erase the uncertainty in the error bands of thermometer readings from 1950 via averaging them together (intensive property, after all, and systemic error issues). It just screams out for rejection as it is claiming to have slayed the Uncertainty Monster via methods known to be wrong and failed.
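The random-vs-systematic distinction is worth a tiny simulation (Python, with made-up numbers; the +0.5 bias is purely illustrative): averaging beats down random noise, but a fixed calibration bias survives any number of readings completely untouched.

```python
import numpy as np

rng = np.random.default_rng(42)
true_temp = 20.0  # hypothetical "true" temperature
n = 10_000        # number of readings

# Random error only: averaging shrinks it like 1/sqrt(n)
random_readings = true_temp + rng.normal(0.0, 1.0, n)

# Same random noise PLUS a fixed +0.5 systematic bias
# (think: a miscalibrated or badly sited thermometer)
biased_readings = true_temp + 0.5 + rng.normal(0.0, 1.0, n)

print(f"mean of random-error readings: {random_readings.mean():.3f}")  # ~20.0
print(f"mean of biased readings:       {biased_readings.mean():.3f}")  # ~20.5
```

Ten thousand averaged readings from the biased instrument are still wrong by exactly the bias; more averaging only makes the wrong answer more precise.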
Do note, in the example above, the reference to interest rates making an unexpected excursion from prior experience. We’ll come back to that in modern terms below. Then it was a spike high. Right now, we are beyond historically low. Both “Uncertainty” flags. So how to decide in that context, eh? Greenspan was on CNBC today “admiring the problem”.
Returning to the Uncertainty page. This is a long quote, but important to keep intact:
The next economist to deal extensively with uncertainty was GLS Shackle. Shackle tried to further integrate uncertainty into economic theory in books like Epistemics and Economics: A Critique of Economic Doctrine. That may not be of too much interest to non-economists but he also made some interesting points about uncertainty more generally. He was especially interested in the issue of decision-making under uncertainty – which he understood to be entirely different to decision-making in the face of a probabilistic or ‘risky’ future. He thought that decisions in the face of uncertainty were unique as they are often required but there is no definite way to approach them. From his book Epistemics and Economics: A Critique of Economic Doctrine:
To be uncertain is to entertain many rival hypotheses. The hypotheses are rivals of each other in the sense that they all refer to the same question, and that only one of them can prove true in the event. Will it, then, make sense to average these suggested mutually exclusive outcomes? There is something to be said for it. If the voices are extremely discordant, to listen to the extreme at one end of the range or the other will have most of the voices urging, in some sort of unison, a turn in the other direction. ‘The golden mean’ has been a precept from antiquity, and in this situation it will ensure that, since the mass of hypotheses will still be in disagreement with the answer which is thus chosen, they shall be divided amongst themselves and pulling in opposite directions. Moreover, the average can be a weighted one, if appropriate weights can be discovered. But what is to be their source? We have argued that statistical probabilities are knowledge. They are, however, knowledge in regard to the wrong sort of question, when our need is for weights to assign for rival answers. If we have knowledge, we are not uncertain, we need not and cannot entertain mutually rival hypotheses. The various hypotheses or contingencies to which frequency-ratios are assigned by statistical observation are not rivals. On the contrary, they are members of a team. All of them are true, each in a certain proportion of cases with which, all taken together as a whole, the frequency-distribution is concerned. Rival answers might indeed be entertained to a different sort of question, one referring to the result of a single, particular, ‘proper-named’ and identified instance of that sort of operation or trial from which the frequency-distribution is obtained by many-time repeated trials. But in the answer to a question about a single trial, the frequency-ratios are not knowledge. They are only the racing tipster’s suggestion about which horse to back.
His suggestions are based on subtle consideration of many sorts of data, including statistical data, but they are not knowledge.
I have quoted Shackle at length to give the reader a sense of how reading his work might be a useful guide to making certain decisions that are encountered with some regularity in climate science. Epistemics and Economics is partly about economic theory but it is also a book devoted to how rational people can make decisions under uncertainty.
The key bit being the punch line at the end:
But in the answer to a question about a single trial, the frequency-ratios are not knowledge. They are only the racing tipster’s suggestion about which horse to back. His suggestions are based on subtle consideration of many sorts of data, including statistical data, but they are not knowledge.
That is exactly the problem with Climate Models. They are just automated tip sheets.
The next economist that may be of interest is Paul Davidson. Davidson highlights the fact that economics is a ‘non-ergodic’ science. By ‘non-ergodic’ he means that the future does not necessarily mirror the past; just because x happened in the past does not mean that x will happen in the future. He writes:
Logically, to make statistically reliable probabilistic forecasts about future economic events, today’s decision-makers should obtain and analyze sample data from the future. Since that is impossible, the assumption of ergodic stochastic economic processes permits the analyst to assert that the outcome at any future date is the statistical shadow of past and current market data. A realization of a stochastic process is a sample value of a multidimensional variable over a period of time, i.e., a single time series. A stochastic process makes a universe of such time series. Time statistics refer to statistical averages (e.g., the mean, standard deviation) calculated from a single fixed realization over an indefinite time space. Space statistics, on the other hand, refer to a fixed point of time and are formed over the universe of realizations (i.e. they are statistics obtained from cross-sectional data). Statistical theory asserts that if the stochastic process is ergodic then for an infinite realization, the time statistics and the space statistics will coincide. For finite realizations of ergodic processes, time and space statistics coincide except for random errors; they will tend to converge (with the probability of unity) as the number of observations increase. Consequently, if ergodicity is assumed, statistics calculated from past time series or cross-sectional data are statistically reliable estimates of the statistics probabilities that will occur at any future date. In simple language, the ergodic presumption assures that economic outcomes on any specific future date can be reliably predicted by a statistical probability analysis of existing market data.
Or, simply put, the assumption is that “past is prologue”… yet we know it isn’t. This puts the kibosh on the whole modeling deal.
Did the past predict the Younger Dryas? How about “1800 and froze to death”? The hot 1930s, did they predict the cold 1970s? Those cold 1960s and 70s when we were being told to “Be Afraid, Be VERY AFRAID of the coming Ice Age Now!!!”; did they predict the hot 90s and the flat ’00 (oughties or naughties? ;-) ?
The simple fact is that climate, like weather, has a large chaotic component, so it is not ergodic.
Past is not prologue, and time series analysis does not predict the future. This matters rather much more than has been considered.
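A quick sketch of what “non-ergodic” looks like in practice (Python, using a textbook random walk as a stand-in for any non-ergodic process; nothing here is climate or market data): the cross-sectional “space” statistic behaves nicely, while the time average of any single realization wanders off on its own. The two do not coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
n_series, n_steps = 2_000, 2_000

# An ensemble of simple random walks -- a textbook non-ergodic process
steps = rng.choice([-1.0, 1.0], size=(n_series, n_steps))
walks = steps.cumsum(axis=1)

# "Space statistic": average across all realizations at the final time
space_mean = walks[:, -1].mean()

# "Time statistic": the time-average of each single realization
time_means = walks.mean(axis=1)

print(f"ensemble ('space') mean at the last step: {space_mean:.2f}")       # near zero
print(f"spread of per-series time averages:       {time_means.std():.1f}") # large
```

For an ergodic process those two numbers would agree (up to sampling error). Here the ensemble mean sits near zero while individual histories have time averages scattered tens of units away from it, so the past of any one realization tells you little about its own future, let alone anyone else’s.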
He also makes the case – and this is of interest to those in other sciences – that non-ergodicity may apply to systems that are very sensitive to initial conditions. That is, systems which are commonly referred to as ‘chaotic’ today.
That is a rather classic description of weather, climate, and climate models: “very sensitive to initial conditions”. Now mix in that the computer models have lots of tuned parameters, have feedback loops of unproven accuracy – so subsequent steps “initial conditions” can have lots of stochastic jitter in them compared to reality – and you have a recipe for the statistical junk that are the present “Climate Models”.
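“Very sensitive to initial conditions” can be shown in a half dozen lines (Python, using the classic logistic map as a stand-in for any chaotic system; it is not a climate model): two runs started a hair’s breadth apart end up nowhere near each other.

```python
# Logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).
# Two runs start 1e-10 apart and are iterated together.
r = 4.0
x1, x2 = 0.2, 0.2 + 1e-10
max_gap = 0.0

for _ in range(100):
    x1 = r * x1 * (1.0 - x1)
    x2 = r * x2 * (1.0 - x2)
    max_gap = max(max_gap, abs(x1 - x2))

print(f"initial gap: 1e-10, largest gap seen in 100 steps: {max_gap:.3f}")
```

The gap grows roughly exponentially until the two trajectories are fully decorrelated, which is exactly why any model error in “initial conditions” (or in the feedbacks that feed each subsequent step) swamps a long-range forecast.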
Think “statistical junk” is too harsh? Well, it is based on good theory and a long history of bad practices:
The next economist that merits mention is Tony Lawson. Lawson has gone right back to basics to try to tackle the aspect of uncertainty in economics. He makes the case that recognising uncertainty requires the economist/scientist to occupy an entirely different ontological position – that is, they have to view the world in an inherently different way to the way their uncertainty-free colleagues do. Lawson’s work is massively complex and attempts to build up new epistemological and ontological foundation through which scientists can access truths in the face of uncertainty. I will try to give the reader something of a flavour here. Much of this rests on Lawson’s attack on mathematical modelling as the end goal of science. Lawson claims that only ‘closed systems’ – that is, systems that are both deterministic and in which we fully understand the determinates driving the system – can be mathematically modelled in any serious way.
At this point I’ll leave off the long quotes, since “massively complex” is not well suited to “simple and attainable”. Folks wanting a deeper dive can hit the link and read more. By now I think it is pretty clear that Economists are not just looking at pennies and trade, and have a long history of looking at math and models.
When I chose Econ as my major, there was no “Computer Science” degree offered at UC. There was Electrical Engineering – heavy in hardware and thermo, the Math Major – with a minor in computer stuff but mostly dense math theory, or Economics – which had a lot of accounting, data collection, and econometric modeling use of computers, and some pretty easy theories to learn. I chose Econ (since it can sort of be sold as a quasi-business degree) and proceeded to take a slew of computer classes. FORTRAN, ALGOL, COBOL, Biomedical Applications of Computers, and more. The point? Economists have been at this whole computer modeling thing way longer than “Climate Scientists”, even before there were Computer Science Majors from UC…
The “put downs” of Economists who look into the math and modeling of “Climate Science” just shows how ignorant they are of Economics AND the history of computing. And how entirely devoid of appreciation of the Uncertainty Monster they really are. They have about 75 years of catching up to do to get where Economists are today on Uncertainty and modeling. (A good first step would be dropping the hubris and arrogance and doing a bit of modest introspection… thinking “perhaps, just maybe, I might be wrong. What alternatives are there?”)
Remember that up above, there was mention of a time when US interest rates suddenly and unexpectedly moved up to near the 15% to 17% level. Prior to that, historical considerations said interest rates ought to be stable and dull. “Something changed”, but it isn’t clear really just what.
Sure, Volcker raised rates, but The Fed is a reactive organism. It was reacting to “inflation”. But what caused the inflation? Was it going off the gold standard? Massive spending on the Vietnam War? The Baby Boomers creating giant demand in excess of supply? The end of the post W.W.II “Economic Miracle” as the rest of the world got into the manufacturing business too? Some mix of it all?
It really IS an interesting question of just what, or who, sets interest rates. Right now, The Fed is saying they are “data driven”, so have left interest rates near zero for a fairly long time. The BOJ (Bank Of Japan) has had a ZIRP (Zero Interest Rate Policy) for longer, and the EU with more volatility ended up in the same place. IF interest rates were really determined by Central Banks, why have they varied so much and why have they ended up in the same place?
I would assert that The Fed is just a slow and warped mirror of general economic productivity. More economic growth and consumption lead to more demand for money so higher interest rates to convince folks to “save more, consume less” right now. When folks have lots of spare cash (due in part to not consuming) interest rates drop as they all try to buy the same limited investment instruments. In both cases, the Central Bank reacts.
Now that reaction can be good or bad. They might print tons of money (literally, tons of it…) and cause lots of excess demand leading to shortages and inflation, and eventually higher interest rates. They might be “austere” and keep money tight, leading to a drop in building, hiring, and eventual deflation and lower interest rates. But can they really change the starting base to which they must respond?
The accepted theory is the Keynesian one (in some variations): that the level of money and interest rates can change the real economy. But even Keynes himself said that could only work for a short 1 or 2 years; after that, the real economy would dominate the monetary. Assuming Keynes was more right about his own theory than the folks who have followed him (and ignored his advice about short term only), this implies strong limits on what Central Banks can ultimately achieve. Which implies similar limits on how much they can “set” interest rates.
Is it enough to change an established trend? I think the most we can say is “maybe”. Volcker did many things, and interest rates were only one of them. Changes of money supply and a new President with a growth agenda likely did as much, or more, to shift the direction of the economy.
So right now we’ve got folks stressing over interest rates being “historically low” with Greenspan on CNBC saying that they can only go up from here (so implying a crash of the Bond Market on the cards, while denying any possible urgency in the timing or prediction of a pending event… typical Greenspan doublespeak.) Yet we are at historically low interest rates.
This has folks with access to major credit (i.e. only the very rich) borrowing like crazy and building all sorts of things in Silicon Valley. Whole neighborhoods being bulldozed and replaced with “Agenda 21” style blocks of 4ish story or more apartments over retail centers – so you don’t need a car… which everyone has anyway… since we can’t all work in retail at home. Yet the economy isn’t going gangbusters. It has an effect, but not the intended one. What the interest rates giveth, the tax rates taketh away…
Greenspan: ‘Abnormally Low’ Rates Will Pop Bond-Market Bubble
By F McGuire | Friday, 04 Aug 2017 09:04 AM
Former Federal Reserve Chief Alan Greenspan warned that “abnormally low” interest rates will pop a bubble in the bond markets.
“The current level of interest rates is abnormally low and there’s only one direction in which they can go, and when they start they will be rather rapid,” Greenspan told CNBC.
Since December 2015, the Fed has approved four rate hikes, but government bond yields remained mired near record lows, CNBC explained.
“I have no time frame on the forecast,” he said. “I have a chart which goes back to the 1800s and I can tell you that this particular period sticks out. But you have no way of knowing in advance when it will actually trigger,” he said.
“It looks stronger just before it isn’t stronger,” he said. Anyone who thinks they can forecast when the bubble will break is “in for a disastrous experience.”
So while admitting that he can’t predict it either, he’s predicting. OK, got it…
But that ties back to the whole non-ergodic issue. The past is NOT prologue!
WHY are interest rates so low, despite The Fed?
Could it be that we’ve trained an entire generation NOT to trust the stock market and the Financial Sector?
Perhaps it is the $TRILLIONS shipped to Japan and China and Oil States all looking for a place to stash their cash that is not subject to their own economies, voters, and rulers? (Special Mention for the EU putting up a big “DO NOT PUT MONEY IN EU BANKS!!!!” sign in how they handled Cyprus… )
Maybe it’s just that a large demographic bubble of “boomers” is heading into retirement and the “rules” say to reduce risky stocks and buy bonds then.
Does The Fed control the demographics, the foreign investors, or the emotional state of the population? Eh?
Will any of those things “suddenly change”?
Or maybe it is just that any increase in money income is going entirely to the top few percent, and they can’t spend it all and have run out of reasonable investment opportunities as the other 95%+ have already spent all their money and just can’t spend more. Have to put all the chips somewhere…
Could it be that any physical investment in real plant, equipment, and jobs is happening in CHINA? Hard to make our economy move (and thus interest rates rise) when competition is getting all the investments, jobs, income, etc. etc. etc. Make interest rates near zero and watch the factories being built in China…
Then again, perhaps driving regulations through the roof, making energy so expensive you can’t cool your home nor drive to work, and making many occupations illegal isn’t all that good for economic growth, so not much demand for loans. Then again, it could be the draconian swing to near ZLP (Zero Loans Policy) driven by Dodd-Frank legislation, as they drove the laws way too far into ‘strangulation’ once they realized their prior policy of “free money and houses for everyone” was daft and led directly to the Financial Meltdown.
The point here is pretty simple. Greenspan is making a non-prediction prediction based on a quasi-ergodic belief system. Many external things can cause an economic stagnation, forcing rates to stay low, The Fed be damned. Prime among them being the Progressive Socialist Stagnation from way too many laws, regulations, Social Justice (an oxymoron…) Programs, skyrocketing fuel and electricity prices, taxes, and more that all act to strangle the Real Economy of actual investment in physical plant, equipment, and labor. First it stagnates. Eventually they go for massive money printing and ‘wealth redistribution’, and then you get hyperinflation… followed by extreme economic collapse.
So just what, In The Real World, says all that is being changed or reversed? I’m just not seeing it. Trump is trying, and making some progress, but the counter-force is huge. IMHO, it all comes down to a bet on Trump winning or not. If he can’t get the crap out of the economy, and large numbers of the rich and powerful do NOT want to give up their slice of mandates and subsidies, it isn’t going to change.
So what do I see in chart data?
Now Greenspan is correct, nothing changes until it does. But this could be days, or decades.
SPY The S&P 500 continues a slow slog upward, along with bonds. All those rich folks have to stick their money somewhere and The Fed is giving them lots of it to stick. That hasn’t changed.
Gold is down from the peak, had a Dead Cat Bounce, and is now in the relaxation / stability run out.
News of “Dollar Weakness!” mostly looks like the slide of the Euro et al. has stopped, with a bit of a bounce at their bottom. More like 3 months of stability than anything else.
TLT has roughly constant volume. Most notable is that it is higher than 8 or 9 years ago. More folks looking for security than yield. MACD and DMI both say an OK time to be in, but not one particularly going to make a bundle. It is down from the wobble just prior, so maybe a bit of a bounce on a trade, but the ideal entry was at the end of last month, so way late in the trade.
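For readers who don’t chart: MACD is just the gap between a fast and a slow exponential moving average of price, with a “signal line” smoothing that gap; an “OK to be in” reading roughly means MACD sitting above its signal line. A toy sketch with made-up prices (standard 12/26/9 settings, not actual TLT data):

```python
# Toy MACD calculation (12/26/9 defaults). Prices are fabricated
# for illustration; this is not real TLT data.

def ema(values, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2 / (span + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def macd(closes, fast=12, slow=26, signal=9):
    """Return (MACD line, signal line) for a series of closing prices."""
    fast_ema = ema(closes, fast)
    slow_ema = ema(closes, slow)
    macd_line = [f - s for f, s in zip(fast_ema, slow_ema)]
    signal_line = ema(macd_line, signal)
    return macd_line, signal_line

closes = [120 + 0.1 * i for i in range(40)]  # gently rising toy series
m, s = macd(closes)
print(m[-1] > s[-1])  # on a steadily rising series, MACD holds above its signal line
```

The “bullish but not going to make a bundle” read in the text corresponds to MACD above its signal but with only a small gap between them.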
So what’s my point?
Pretty simple, really:
There is a huge Uncertainty at the moment in making any prediction about interest rates. Assuming that being out of line with ‘the past’ is prologue to a rise is assuming ergodic behavior that is not in evidence.
I see nothing significant in the news flow to say any of the ‘maybe’ issues above has changed enough to matter.
All that, to me, says “Nothing much changing” is far more likely than any other Uncertain outcome. Until something changes, nothing has changed…
BUT: Markets “crash” when the last sucker is in, and everyone wants to exit at the same time. IMHO, this is often “stimulated” by a Fat Wallet shorting their market. This lets them “sell first” and then buy back to cover their position later, after they have herded the cattle off the cliff. So here’s the question:
Is this the time that Fat Wallets, like Soros and Friends, would choose to start shorting the Bond Market? Do they even have enough money to make it happen? The Bond Market is way way larger than the stock market. $Trillions, not $Billions needed to play big shot. So is it the time, and is it even possible? Is Greenspan being trotted out to “startle the herd” off the cliff?
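For completeness, the “sell first, buy back to cover later” arithmetic of a short position is trivial; the quantities and prices below are made up purely to show the mechanics:

```python
# The "herd the cattle off the cliff" trade: sell borrowed units high,
# buy them back cheaper after the panic. All numbers are hypothetical.

def short_pnl(qty, sell_price, cover_price):
    """Profit on a short: proceeds from selling borrowed units,
    minus the cost of buying them back to cover."""
    return qty * (sell_price - cover_price)

# Short 1,000 units of a bond ETF at $126; the herd stampedes;
# cover at $110:
print(short_pnl(1000, 126.0, 110.0))  # 16000.0
```

The catch, per the text, is scale: moving a $Trillions bond market this way takes a position big enough that even a Fat Wallet may not be able to fund it.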