2022 By The Books: Q2

Featuring data science, post-Soviet Russia, altruism and bureaucrats.

Daniel Issing
15 min read · Oct 6, 2022


Photo by Jason Wong on Unsplash

The Beginning of Infinity, David Deutsch

The subtitle is “Explanations that transform the world” and honestly, how much more Kurzweilian could you name your book? Fortunately for the reader, the thesis is more solid than the title may suggest and goes something like this: Infinity seems like a ridiculous concept when applied outside of mathematics (and even there, it is often a source of confusion). Against this, Deutsch wants to convince us that this is shallow thinking: To the extent that humanity is able to generate and maintain open-ended, self-correcting processes, the sky is the limit. It is the great achievement of the Enlightenment to have set in motion such processes in many areas, and as long as we remain faithful to that modus operandi (abandoning theories that have been refuted and not denying anyone access to the marketplace of ideas), growth is not only sustainable but literally boundless. Which also means that, despite acquiring more knowledge with every passing day, humanity will always face an infinite number of problems for which no solutions are known (and an infinite number of problems we are not yet aware of). The legacy of Popper looms large here, and even as someone who believes there’s a lot to like about Popper’s philosophy, I was surprised by how short the critical distance between the two is. I’m not sure the godfather of critical rationalism would have appreciated the irony.

Deutsch does go beyond Popper when he expands on the notion of error correction, traditionally seen as a way of ensuring that bad choices can be reversed. As Popper wrote, the central question is not “who should rule?” but “how is the state to be constituted so that bad rulers can be got rid of without bloodshed, without violence?” Deutsch takes this to a whole new level, arguing that the mechanism is baked into our universe at the most fundamental scale: The fact that many observable quantities are not continuous, but come packaged in quanta, is a way to ensure that errors don’t accumulate over time. Essentially the same idea underlies the switch from analog to digital signal processing. How’s that for a bold claim?

Falsificationism is a crafty tool for disproving fallacious theories. But on the positive side, if theories are always ephemeral, subject to revision at any moment, and we can never be certain that what we believe we know is indeed true, how can there be such a thing as progress? I’m not sure he addresses this question directly; instead, he proposes we speak of “misconceptions” superseding one another rather than of “solutions” or even “theories”, both of which suggest a certainty they cannot provide. I think that’s overly negative. While Einstein’s theory of relativity, for example, won’t be the last word on the mystery of spacetime, it is almost inconceivable that it could (with more evidence at hand) turn out to be further from the truth than Newton’s account. Even if we are, epistemologically speaking, convinced that none of our current theories about the universe are perfect, I’d argue that we have good reasons to think that they are often better than their predecessors.

On the whole, I applaud the effort to take an idea and generalize it ruthlessly, but I think he takes it a bit too far. A single grand unified theory of physics, mathematics, ethics, aesthetics and politics can only be very vague or hilariously wrong, and Deutsch’s attempt probably belongs to the former category. A single chapter on art is not only unlikely to convince anyone who disagrees with his account; it also stands little chance of containing enough substance to prove what it sets out to do. Why he chose to add an imaginary (and lengthy) Socratic dialogue instead of elaborating on the contentious points, I do not quite understand.

Towards the end, he diverts the reader’s attention to a theme that deserves greater exposition: how very differently we tend to judge (and act upon) things caused by humans vs. those that result from naturally occurring processes. He makes the point around climate change, where almost all our attention is devoted to combatting human contributions to global warming, while very few resources are spent on preventing other events from screwing us over. A supervolcano eruption could be just as bad as (or worse than) the combined effect of industrial emissions, but our attitude towards it tends to be “nothing we can do about it”. Reading between the lines, you can feel his dissatisfaction with “fixing climate change” and its focus on reduced consumption, renunciation and sacrifices — in fact, the very idea of “fixing” the climate sounds delusional to him. While he is not generally opposed to such measures, his point is that none of them will be of any help in the event of a massive eruption. Worse yet, even if they were, that would not be the end of it: solutions are bound to create more problems, and it is an illusion to believe we could fix things once and for all.

Bureaucracy, James Q. Wilson

Not exactly a gripping read, and no grand theory of everything either, but it delivers on its premise. Rather than one all-encompassing framework about why bureaucracies do what they do, it demonstrates that public agencies are in fact far from monolithic. Not only is there a lot of regional variation in the quality of service or overall performance; efficiency also differs wildly depending on which federal agency you’re looking at. A simplistic self-interest model (whereby government employees try to maximize wealth or power) falls short of reality. In fact, these agencies operate under many constraints beyond the obvious fact that they rely on taxes for their continued existence.

Libertarians in particular are doing themselves a disservice with their extremely cynical views of the public sector. True, because it is not embedded in a competitive framework with an easily identifiable metric (revenue), it often does much worse than the private sector along any number of dimensions. And that may very well be a good argument to limit the number of activities the government actively intervenes in, or to outsource them to private actors. But unless you claim that government shouldn’t exist at all, an exclusively negative approach to bureaucracy means you won’t be contributing much to the conversation. Instead, if you agree that the performance of public agencies is not a given, and you care about them doing better, have you considered not deriding them constantly? If public sector employees are uniformly viewed as lazy, corrupt and incompetent, that may not have much of an effect on the current workforce, but it will most definitely affect what types of people will even consider a career there. No need to put on rose-colored glasses and paint every city clerk as an unsung hero, vital to the very survival of the nation; yet, all else being equal, wouldn’t we rather have conscientious, hard-working individuals fill these positions?

Finally, having correctly identified that the usual disciplinary mechanisms of market competition are not available, you need some alternative theory of how to improve public agencies. As a silly example, any given agency can offer better service merely by being allotted more money, but is that what we should strive for? Double the income tax, and next time the waiting time for your new license plate will be cut in half! Ironically, we sometimes even want bureaucracies to be inefficient, or at the very least not to blindly maximize outcomes according to a single metric: Imagine what would happen if the IRS decided to maximize tax revenue by all means available, or if the police engaged in a no-holds-barred attempt to incarcerate as many potential offenders as possible. Wilson’s book, while refraining for the most part from giving concrete advice, presents the problem a lot more clearly than any other text on the subject I’m aware of.

La Carte et le Territoire, Michel Houellebecq

Wherein the author imagines his own murder, an idea he seems to take some undeniable pleasure in. In his characteristic tongue-in-cheek way, his only regret is that he wasn’t murdered for sinister motives, or ideally for his oeuvre, but for purely material gain (the murderer stole a valuable painting and used the carnage as a cover-up, leaving behind the author’s corpse as mere collateral damage).

The protagonist is invariably Houellebecqian: a middle-aged Frenchman, moderately or even very successful in terms of status and standard of living, yet devoid of any deeper motivations (we never learn what inspired him as an artist) and therefore fundamentally unable to understand how he achieved the privileged position he now occupies in society; incapable of building, much less maintaining, meaningful relationships with other people, and detached from what’s going on around him.

What does the title mean? Map and territory are but one example of a representation (one of many randomly italicized words in the book) of some physical object or process. Sometimes Houellebecq likes to rub it in your face, as when the protagonist photographs Michelin maps; maybe in some larger sense it’s about the relationship of art to the “real world”. I was expecting something more fundamental, more philosophical, but it was either too subtle for me or simply not there.

As per usual, we find a portrait of a country, a society, a mode de vie in decline. Just like Les particules élémentaires and Soumission, which I’ve read before, it depicts a hedonism that has given up on everything except pleasure itself: There is no driving force, no vision of the future, not even a clash of ideas, just the tranquil, uninspired enjoyment of momentary pleasures. In this case, the focus is less on the individual and more on the country as such; the novel imagines France transformed from an industrial powerhouse into little more than a giant Disneyland, a full-blown tourist destination for wealthy travelers. Michelin embodies this process, having been transformed from a tire manufacturer into a guide for high-end hotels and restaurants.

Houellebecq is not a man for silver linings, much less happy endings, but one has to wonder if the fact that Michelin’s main source of revenue, then as now, springs from honest-to-God manufacturing isn’t a glimmer of hope, shrouded in irony, that he couldn’t resist including.

Chanson Douce, Leïla Slimani

Never did a novel remind me so much of my high school French textbooks. Especially in the first half, the book often reads as if it were intended for people who hadn’t yet mastered the language. Here is one example:

“Myriam est au bureau avant 8 heures. Elle est toujours la première. Elle n’allume que la petite lampe posée sur son bureau.” (“Myriam is at the office before 8 o’clock. She is always the first. She turns on only the small lamp on her desk.”)

It gets better with time but the tone is at times almost infantilizing. The portrayal of a typical (?) Parisian middle-class family and their attempt to organize their life as little as possible around their kids rings familiar and is captivating in its depressiveness — although it remains an open question how faithful it is to the real-world events (which took place in NYC).

Doing Good Better, William MacAskill

An introduction to effective altruism (EA), written when the idea was still a fringe concept. A lot of it covers the fundamentals and is likely to be too basic for most readers today; indeed, it is hard to imagine nowadays what a splash the book caused when it appeared. The style borrows heavily from Malcolm Gladwell, and in substance it shares much of the upbeat optimism that typically pervades books of this genre: Think a bit harder, trust the science, and the rest will flow. Unlike the Gladwells of this world, however, EA asks for radical changes — not just in the abstract, but for you personally, dear reader.

Many of the arguments in the book, while offering an interesting perspective, are likely to leave the reader bewildered by their complete obliviousness to straightforward objections. One particularly confused example is that of voting, an act which the author claims is the equivalent of several thousand dollars donated to charity. The argument goes something like this: Yes, an individual vote is very unlikely to swing an election. But because so much money is at stake, in the incredibly unlikely event that it does make a difference, the consequences (spread out over the entire population, of course) are massive. The five minutes it takes to fill out the ballot could thus be the most effective thing most of us have ever done. Really?
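
The expected-value arithmetic behind that claim is easy to reproduce. Here is a minimal sketch in Python, with placeholder numbers of my own choosing (both the decisiveness probability and the dollar figure are invented for illustration, not taken from the book):

```python
# Expected societal value of one vote, in the book's style of argument:
#   E[value] = P(your vote is decisive) * (total value at stake).
# Both inputs below are invented placeholders, not MacAskill's figures.

p_decisive = 1e-7       # rough odds that a single vote swings a national election
value_at_stake = 7e12   # hypothetical dollar difference between the two outcomes

expected_value = p_decisive * value_at_stake
print(f"Expected value of casting a vote: ${expected_value:,.0f}")
```

Even granting the framework, the result is exquisitely sensitive to both inputs; shrink either one by a couple of orders of magnitude and the “most effective five minutes of your life” evaporates, which is precisely the kind of objection the book glosses over.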

For someone who has a reputation for being a hard-headed, coldly calculating utility-maximizer, it sure seems a wee bit naïve to equate the total budget approved by Congress for the legislative period with the net benefit of swinging an election. And even if this were so, are we confident that the average voter could reliably pick the party that would deliver this massive windfall gain?

Despite the largely negative tone of the above, I am generally supportive of EA and believe that we should both donate more and think harder about how to donate. But while I’d like to see much more EA at the margin, things get hairy when the approach is pushed to extremes. This is also why existential risk gets so much attention in the community: Any course of events that has the potential to wipe out humanity, while maybe not infinitely bad, is certainly the worst thing that could happen; so no matter how shaky the reasoning behind it, if it’s not literally unthinkable (or maybe even then?), massive countermeasures are required. Which leads some EAs to panic about the possibility of fundamental particles experiencing pain and pleasure, and other, shall we say, exotic concerns — I swear I’m not making this up!

Overall, this could have been a great book, and even in its current form it does contain many valuable insights and entertaining thought experiments. A little less wisecracking and a larger dose of plain old common sense would have gone a long way.

On Anarchism, Noam Chomsky

Not actually a book but a poorly organized collection of articles, lecture notes and speeches; essentially the equivalent of some blogger trying to monetize their posts by turning them into book chapters. Nor does it make up in substance for what it lacks in structure: Almost all of it is rambling, accusatory, and often purely semantic (many a true Scotsman can be found); the modus operandi is making grandiose claims about how things should be without so much as an indication of how to achieve this blissful state of affairs. Syndicalism sure sounds great in theory, but there aren’t a great many successful examples to point to; or, if I’ve missed them, Chomsky does not bother to mention them either.

The rhetoric is distinctly antiquarian; for all the talk about workers throwing off the yoke, one has to wonder if a quant working for a hedge fund (“forced to rent out himself in order to survive”, as Chomsky puts it) is really so much worse off than a perfectly independent owner of a mom-and-pop store. There are some interesting claims about classical liberal writers such as Smith, Humboldt, Tocqueville or Mill, and an attempt to reclaim their works for his cause, but ultimately his account is rather speculative and unlikely to convince anyone but the gullible.

Secondhand Time, Svetlana Alexievich

Started listening to the audiobook (a production I can recommend) for entirely predictable reasons: trying to make a little more sense of the Russian invasion of Ukraine and the strange way in which another country can be considered both a brother and a mortal enemy. The Russian national character, to the extent that such a generalization makes sense at all, is something that has always seemed strange to me, almost incomprehensible, and the number of expat Russians I’ve met who would literally go through the roof if their beloved country or president was criticized… Now that I’ve finished Secondhand Time, I don’t think I understand their attitudes any better — it just adds more (and more bizarre) examples to my list.

What it did help me understand is why liberal society never really got a foot in the door after the fall of communism, and how the entrenched crony capitalism that replaced it made ordinary citizens deeply nostalgic, not to say amnesic. It is less clear why other Eastern bloc countries came out of it so much better: the Baltics, Poland or Czechia have mostly escaped Russia’s fate. Is it because power in Soviet times was centered in Moscow? All in all, this is a fascinating piece of oral history, with all the ambiguities, weirdness and at times shocking brutality that characterized the end of the Soviet Union.

The Way Of All Flesh, Samuel Butler

I’ve got to be honest: I put this one down maybe a third of the way in. Butler is (was?) hailed as a tireless critic of Victorian hypocrisy, but does anyone still need to be told about this these days? (If anything, the Victorians are probably underrated today.) Very slow exposition; in short, a book that entirely failed to grip my attention, try as I might. I don’t think I could even produce a summary anymore!

Weapons of Math Destruction, Cathy O’Neil

A book that thrives on a clever pun rather than on content. When it was published in 2016, data “science” — or, really, big data — was still on the rise, and you could unironically write articles calling it the sexiest job of the 21st century. [The field has since expanded enormously and is hardly considered an unusual career path anymore.] But even back then, O’Neil’s promise to look at the darker side of that rapid rise to fame did not fall on deaf ears.

Now I’m as biased as it gets, making a living in that very sector myself, so take the following with a grain of salt. And I should highlight that she’s not so much intending to make the case against data science as she’s arguing against bad data science, of which there is arguably a lot out there. I just think it’s a very poor case.

As I hope most of you would agree, finding examples of algorithms gone awry and of bad statistical practice isn’t all that hard. But if you want to make the case that the answer isn’t simply “let’s try harder to write better algorithms”, but that something fundamental is wrong — possibly beyond repair — with the entire approach of data-driven decision-making, you have to do more than present a few case studies and call it a day. Her argument seems to be twofold: One, algorithms are often unfairly (if only implicitly) biased against marginalized groups, and this has to be so because they are evaluated by efficiency metrics. Two, apart from not encoding universally shared notions of fairness (more on that later!), they often drive us in socially undesirable directions, because the data on which they’re trained is often unable to capture “the thing itself” and instead has to rely on proxies. One example is universities entering an arms race to attract prospective students by gaming the ranking algorithm: Because “quality of education” is hard to measure, such algorithms may instead track easy-to-quantify characteristics that correlate with it, such as the number of indoor swimming pools on campus. And so, because Stanford and Harvard have more of those than the average school, everyone starts building pools like crazy in a desperate attempt to pull in more undergrads, while tuition is skyrocketing.

Wait, really?

What I believe is quite well-understood by now is that algorithms are never perfectly objective, but may very well contain (and amplify) the biases of their creators. Fine and well — there probably was a period during which we deluded ourselves (some of us, anyway) that by outsourcing judgement to machines, we may get rid of subjectivity. That turned out to be wrong [citation needed]. Anyway, the alternative to having computers support our decision-making is to rely on human judgement alone, and — lo and behold! — that doesn’t seem to be exactly free of bias either. Which of the two does better, and under which circumstances, is in principle an empirical question, and one that, ironically, data science could help us to answer. But strangely enough, for a book about misapplied statistics, no attempt is made to quantify their relative performance, or to perform any kind of cost-benefit analysis comparing the two.
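
To make that concrete: the comparison the book never attempts could start as simply as scoring past human and algorithmic decisions against observed outcomes. A toy sketch, with entirely invented data (any real study would of course need far more care about selection effects and sample size):

```python
# Toy comparison of human vs. algorithmic decision quality.
# Decisions and outcomes are binary; all data below is made up for illustration.
human_decisions = [1, 0, 1, 1, 0, 1, 0, 0]
model_decisions = [1, 0, 0, 1, 1, 1, 1, 0]
true_outcomes   = [1, 0, 0, 1, 0, 1, 0, 0]

def error_rate(decisions, outcomes):
    """Fraction of cases where the decision disagreed with the observed outcome."""
    return sum(d != o for d, o in zip(decisions, outcomes)) / len(outcomes)

print(f"Human error rate: {error_rate(human_decisions, true_outcomes):.3f}")
print(f"Model error rate: {error_rate(model_decisions, true_outcomes):.3f}")
```

Trivial as it is, this is the shape of evidence the book’s thesis would need: not a handful of horror stories, but error rates (and costs) for both decision-makers, measured on the same cases.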

I was particularly disappointed by the caricature that she, an expert from the field after all, erects of the inner workings of big data. The reader may be forgiven if the impression she gets from the book is that data scientists spend some time training a model until it performs really well on the test data, and then just kind of deploy it and let it run on autopilot, results be damned. In most if not all cases, devising and training the algorithm is unlikely to be a major part of a data scientist’s work; monitoring, questioning and revising a deployed model very much is.

There’s a lot of talk in these chapters about fairness, and about how technology is biased or unfair (the two are often used synonymously). Much of that discussion, however, assumes that the heavy lifting has already been done — that there is some widely shared notion of fairness we can use as a backdrop to judge various algorithms: an optimistic assumption if ever there was one. Her benchmark seems to be something along the lines of “a person should only ever be judged by what she had a chance to influence”, which sounds great on paper, if not exactly uncontroversial. [I would think it unlikely, and also rather undesirable, that we ever get down to this level of detail.] My understanding is that she’s 100% happy to bite that bullet and objects to the very attempt of grouping people together along any dimension. Many things could be said about this unrelenting attitude; here I would just highlight (ignoring the whole contentious issue of what we believe to be just) that justice itself is costly, and the alternative to occasionally getting a bad deal because of the mental box you were put into may very well be increased costs of service across the board.

Towards the end, she points out that rather than forcing all of humanity under its yoke, big data affects the masses quite differently than it does the very affluent: Rich people can escape being treated as mere data points and get individualized treatment, while outcomes for the poor are determined by which cluster they happen to fall into. I agree with this! Where I beg to differ is with her implicit insinuation that this problem is somehow unique to computer algorithms. Almost any consumer good or service can be individualized if you are willing to spend (a lot) more money on it; indeed, this is one motivation for wanting to be rich in the first place.

For earlier reviews, see: 2018, 2019, 2020 (1), 2020 (2), 2021 (1), 2021 (2), 2022 (1).


