I have finally finished reading Nassim Taleb's latest book Antifragile, including the technical appendices and the afternotes, which take some serious study to really understand. I have to say it has been one of the most important books I have read in years. I have believed and argued for many of his assertions for years, but never really quite understood why I believed them. His expositions have made the reasons clear to me.
For example, I have believed for years and written in this blog repeatedly about the difference between the original entrepreneurs who built a company (think Bill Gates or Steve Jobs or Henry Ford) and the hirelings who later become overpaid CEOs of the same companies. Now I really understand. The difference is that the original entrepreneurs had "skin in the game". That is, they took substantial risks in starting their companies (the vast majority of new companies fail) and could have lost it all, so they deserve the rewards of success. Later CEOs have almost no skin in the game. The worst that can happen to them is that they eventually get fired with a golden parachute. For them, it is all upside (millions in potential pay and bonuses) and no downside.
Much the same can be said about the current political process. Politicians implement all sorts of laws and regulations and policies which affect everyone else but don't really impact them. They have almost no "skin in the game". If the policy is a disaster, they are largely spared the effects.
For example, I have known for years that lots of risk estimates are essentially meaningless. Now I really understand why. People "estimate" the risk of, say, a 100-year flood or a simultaneous failure of three levels of safety backups in a nuclear plant. But an estimate is meaningless without an error bound (a measure of the probable error around the estimate), and for rare events, hard as it is to estimate their probability at all, it is essentially impossible to define the error bound. So the estimate is essentially meaningless, except to give people a false sense of security (think of the Fukushima nuclear power plant). And the further out on the distribution "tail" one goes (the rarer the event), the more a tiny error in the estimate produces massive changes in the probability.
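The tail-sensitivity point is easy to see with a little arithmetic. Here is a minimal sketch (my own illustration, not from the book, and using a simple normal model purely for demonstration): suppose the true variability of some quantity is mis-estimated by just 10%, and watch what that does to the odds of increasingly rare events.

```python
from statistics import NormalDist

def tail_prob(threshold: float, sigma: float) -> float:
    """P(X > threshold) for X ~ Normal(0, sigma)."""
    return 1 - NormalDist(mu=0, sigma=sigma).cdf(threshold)

# Compare a "true" sigma of 1.0 against a sigma mis-estimated by 10%.
# The same small input error blows up as the event gets rarer.
for k in (2, 4, 6):  # thresholds, in units of the true sigma
    p_true = tail_prob(k, 1.0)
    p_off = tail_prob(k, 1.1)
    print(f"{k}-sigma event: probability off by a factor of {p_off / p_true:.1f}")
```

Running this, the error factor grows rapidly with the rarity of the event: a 10% mistake in the estimate barely matters for a 2-sigma event but changes the probability of a 6-sigma event by more than an order of magnitude. That is exactly why "estimates" of very rare disasters are so fragile.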
And of course his central triad (fragile - robust - antifragile) is a profound new idea. The difference between things that merely "survive" volatility (robust) and things that actually profit from and improve with volatility (antifragile) is a critically important distinction. As is the observation that complexity and sophistication in design generally lead to fragility, while simplification and interfering less with natural processes generally lead to antifragility.
I find fascinating his argument that most healthy systems stay healthy precisely because the stressors placed on them (up to a point) by randomness and volatility help to keep them healthy, and that reducing or eliminating the volatility actually leads to a decrease in their robustness. Considering that most of our public policy is aimed at reducing volatility (by, say, smoothing out business cycles, or regularly eating three meals every day), we are clearly headed in the wrong direction.
There are perhaps 20 or 30 other observations in his book which have produced similar enlightenment. If you haven't read this book, do so. But be prepared to work hard. Many of his ideas run counter to "conventional wisdom", and it often takes work to understand the underlying logic of his arguments.