Height advantage in hiking

2 min read

For an outdoorsy, not-so-tall girl, it’s not uncommon to wind up at the back of a pack of significantly taller, male hiking companions. Sweaty and panting, I watch their backpacks recede up the trail, and even the sweep guy might abandon his role to bolt around me. In an endurance situation, mental fatigue sends the foggy brain into rhythmic, ineffectual loops. Unable to do mental arithmetic while moving, one can only see that the negative-space triangle formed by others’ legs is larger for taller people, and imagine that this reflects some advantage… but how much advantage?

Later, off the trail, pen and paper in hand, one can focus on calculating just how much this height advantage adds up to, in terms of explaining how a physically fit person might lag so far behind:

Height of the taller person: ___ inches
Walking cadence: ___ steps per minute
Stride angle (angle between legs at full extension): ___ degrees
Height of the shorter person: ___ inches
The taller of the two hikers, being ___ inches tall, has an assumed leg length (measured from the hip joint pivot point) of ___ inches. Given his stride angle of ___ degrees, he takes steps that are ___ inches long. At his walking cadence of ___ steps per minute, he thus hikes at a rate of ___ miles per hour.

Meanwhile, the shorter person has an assumed leg length of ___ inches. Despite using the same stride angle and walking cadence as her companion (i.e., putting in the same amount of effort), each of her steps is smaller and she therefore covers ground more slowly… merely due to being shorter!

In order to keep up, the shorter person must work harder, by either:
(a) making her small steps more rapidly, at a faster cadence of ___ steps per minute; or
(b) matching her companion's walking cadence, but making each step longer by using a wider stride angle of ___ degrees. (As efficiency-conscious runners well know, increasing step length beyond what is optimal for one's height dramatically increases fatigue.)

Alternatively, if the shorter person exerts only the same effort as her taller companion, she will fall behind by ___ miles per hour of hiking. In that case, the taller person will have to wait (and get to rest!) ___ minutes every hour for the shorter person to catch up.

[Note: We’ve made the simplifying assumption of leg length as a fixed proportion (45%) of overall height – a reasonable constant, given that average ratios of leg length to height, and step length to leg length (a function of stride angle, which correlates positively with speed) enable trackers to infer height from footprints.]
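The note above fixes the whole model: leg length is 45% of height, a step is the chord of the leg-swing angle (2 × leg × sin(θ/2)), and speed is step length × cadence. Here is a minimal Python sketch; the example numbers (a 74-inch and a 64-inch hiker, a 30-degree stride angle, 110 steps per minute) are my own illustrative assumptions, since the post's interactive input values don't survive in this text.

```python
import math

LEG_RATIO = 0.45      # leg length assumed to be 45% of height (per the post)
IN_PER_MILE = 63360

def step_length(height_in, stride_angle_deg):
    """Step = chord of the leg swing: 2 * leg * sin(theta / 2)."""
    leg = LEG_RATIO * height_in
    return 2 * leg * math.sin(math.radians(stride_angle_deg) / 2)

def speed_mph(height_in, stride_angle_deg, cadence_spm):
    """Inches per step * steps per minute * 60, converted to miles per hour."""
    return step_length(height_in, stride_angle_deg) * cadence_spm * 60 / IN_PER_MILE

tall, short, angle, cadence = 74, 64, 30, 110   # illustrative assumptions

v_tall = speed_mph(tall, angle, cadence)
v_short = speed_mph(short, angle, cadence)

# (a) cadence the shorter hiker needs at the same stride angle
# (step length scales linearly with height, so cadence scales with the ratio)
catchup_cadence = cadence * tall / short

# (b) stride angle she needs at the same cadence
catchup_angle = 2 * math.degrees(
    math.asin(math.sin(math.radians(angle) / 2) * tall / short))

# rest the taller hiker banks while waiting, in minutes per hour
wait_min_per_hr = 60 * (1 - v_short / v_tall)
```

With these assumptions the taller hiker travels about 1.80 mph to the shorter hiker's 1.55 mph, so the shorter hiker must either speed up to roughly 127 steps per minute or widen her stride to roughly 35 degrees; otherwise the taller hiker waits about 8 minutes of every hour. Note that at equal angle and cadence, the speed ratio is exactly the height ratio (64/74 here).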

Other factors driving differential physical effort between two companions are undoubtedly afoot during a hike: aerobic fitness, anaerobic endurance, strength-to-weight ratio, movement/form efficiency, backpack contents, stomach contents, sufficiency of recent sleep, injuries, performance of clothing/gear, and who’s chatting more than listening. Still, the point here is that leg length alone has a substantial impact on rate of travel. Regardless of which physical issues contribute to the exertion asymmetry, the optimal solution for both hikers (assuming they value fairness and social interaction) is to “put Herbie in front” — i.e., have the disadvantaged hiker set the pace.

Eli Goldratt’s 1984 classic The Goal vividly illustrates this principle of operational efficiency with… a hiking example! Herbie (the fat kid in the book; in our case, the short hiker) is the bottleneck. When the fast kids hike at their own pace with Herbie at the back of the single-file line of Boy Scouts, Herbie falls behind. They impatiently wait for him at trail intersections, only to take off again the moment he catches up, before he can catch his breath. Herbie gets more and more tired, and thus even more physically disadvantaged, since fatigue initiates a self-reinforcing decline in physical performance. Meanwhile, the fast kids get periodic rest, so the effort differential increases from both directions. Putting Herbie in the front of the line — combined with distributing his backpack load among the fast kids — ensures that the hikers stay together and evenly spaced, and that the physical effort difference is somewhat lessened. (The effort the fast kids sacrifice by hiking below their capabilities is less than the effort Herbie saves by avoiding chronic, desperate catch-up mode.)

Posted in <5 min read, Interactive calculators, Main, Math is everywhere!, [All posts]

Feminist kiteboarding

4 min read

A friendly shout out to last weekend’s all-male kiteboarding gang:  Thank you for not assaulting me! 

Seriously.  I am pleased and grateful to have been treated like a mundane, non-gendered human being.  Nobody hurt me, treated me like fresh meat, or shunned me for declining to be cajoled into sex. 

Over the past 6 years, I’ve taken 16 unaccompanied, overnight kiteboarding and snowkiting trips, traveling from 250 to 5000 miles from home to enjoy my all-time favorite sport.  On 10 of those trips (63%), I’ve been hurt, in my capacity as a woman, by a male kiter. 

That’s the world we live in.  Also, that’s how much I love kiting. 

The most painful thing they do is proposition you and ostracize you when you decline.  Maybe that doesn’t sound that bad?  Do you need to hear about the more salacious incidents to feel moved?  Being cast out of the village is an age-old punishment for non-compliance with the social order.  It hurts deeply.  Twice I’ve sat alone on a dark beach on Christmas Eve, sobbing into my dog’s furry shoulder.  For me, the invalidation and invisibility of that type of injury pains me more than physical aggression.

Not only is social isolation existentially painful, but kiteboarding is in any case, by definition, a group sport.  One needs other people:  to drive multiple cars for downwinders, to exchange tips about aerial tricks, to take photos of one another, to pool emergency spare parts on a beach far from kite shops, and — most importantly — to share après-kite camaraderie.  Then there is the teamwork of launching and landing kites (the highest-risk step of kiting), though in my case I have a trick for using a car or log to safely self-launch.

More often than not, when I have explained this reality, the listener suggests I should stop kiting. 

How such an outrageous idea escapes anyone’s mouth is baffling to me.  The news profiles a girl in an Islamist regime who defies death threats to continue playing her beloved sport of basketball… and the same people who suggest I re-examine the prudence of my kiting hobby recognize and decry the unfair, presumably un-American, curtailment of her freedom, liberty, self-expression, and self-determination.  Is her transgression so much less threatening to you because she’s not here asking you to include her as a peer, but far away, unthreateningly playing on a team with her own kind?  In America, as well as in less wealthy and less democratic nations, women face higher external risks than men do in order to play the same games and enjoy the same weekend activities.

Despite being born with different genitals, we women have the same desire as men do — to play, to explore the world, to experience intensity and challenge in sports, to be physical in our bodies, to revel in nature and touch wilderness.  Hey, are you mentally preparing a few contrarian data points to insist otherwise?   Don’t.  This is something we know – as much as scientific method can enable us to know a thing.  Study after study shows that, to the limited extent that gender-correlated differential desire to play sports exists, it reflects strategic adaptation to social constraints and cultural norms.  Essentialism is the lazy, responsibility-abdicating rationalization of the unconsciously-privileged and ethically-compromised. 

Women are routinely asked to change and constrain normal human behavior, to change our walking route, our clothing, our housing situation, our hobbies, and our vacations in order to reduce hate crimes against us.  Society is slowly awakening to the absurdity of demanding that would-be victims self-segregate and self-censor.  Rational people will someday soon agree that it is men who must change their behavior and make different choices.  Rape is only and entirely the fault of the rapist – it is not in any way ever the fault of a woman exercising her equal human right to stay out late at a party, enjoy a starlit walk on a beautiful beach, or wear a tank top in hot weather.  Ostracizing or badmouthing a woman who ignores or deflects advances is only and entirely a moral failing of the man – it is not in any way ever the fault of a woman joyfully pursuing a beloved pastime. 

Consider that I, too, find adrenaline sports intensely erotic; but, I don’t believe I’m automatically entitled to co-opt your body and your experience in service of mine.  Remembering that everyone you encounter is living a life as complex as your own, with their own oceanic stories, fears, and desires – that’s the hard and simple trick to treating women like people.

Did you know?  In this country, women are sometimes killed for trying to leave a relationship or upon filing for divorce.  Every year, there are incidents where a woman is physically attacked or murdered for not saying yes to a date, giving out a phone number, accepting a car ride, or consenting to sexual activity with a stranger on the street.  Our physical integrity is neither universally socially respected nor politically guaranteed.  “Consent. Or I’ll make you consent” is a widespread attitude among men.  Consequently, for women, the better part of valor sometimes is acquiescing to undesired interactions rather than risking a battle.  (Hence, the woeful undercounting of non-consensual sex.  Hence, the welcome new custom of seeking affirmative consent.)  More often, in response to being rebuffed, the man questions your sexual orientation, calls you ugly names, attacks you online, sends you a torrent of hateful threatening messages, tells everyone he slept with you anyway, starts dangerous rumors about you, or socially ostracizes you.  Saying “no” is risky.

So, I am sincerely grateful that last weekend at the beach was blissfully drama-free. 

Though, it didn’t stay that simple…

But first, an aside on gratitude:  I am tired of being told to be “grateful” for the “compliment” of not being too old or too ugly to be a sex object.  Indeed, the only other female kiter I’ve since met at that kite spot (whom others reference by shorthand as “the heavyset one”, when actually all they need to say is “the woman”, just like in the snowkiting paradise of Sanpete County, Utah one only needs to ask where “the bar” is because there’s only one…) insisted to me that she has never once been excluded or hurt from male kiting gangs, and I thus have suspiciously bad luck.  Arguably, the boys are correct that I could allow myself to feel grateful; if they exclude me it’s only because I’m appealing.  In their worldview (lately normalized on a grand stage by President Trump), women’s worth is tied to appearance.  But in my experience of human existence, a superficial compliment doesn’t cancel out the pain of exclusion.  The two categories have no exchange rate.  Apples and ocelots.  For me, one has absolutely nothing to do with the other.

One must also be thoughtful to recognize that “gratitude” is the infamous benediction from rapists; they are known to tell victims to feel grateful for being good enough to be chosen.  On one kiting trip, I passed out from drinking (for the first time in 20 years, which I resentfully feel compelled to stipulate, because we judge women more harshly for smaller transgressions), and woke up to a man telling me I should be grateful that he didn’t do anything to me — that I was apparently good enough to be spared.  Either way, they choose for us, and we are deflated by the reminder of our chronic vulnerability.

The fantastical, heart-warming uneventfulness of that particular kiting weekend at the lake has been succeeded by a more familiar dynamic:  About half the kiters (“male kiters” being virtually redundant) shrugged off my offer to trade contact info.  The privileged, tone-deaf dismissal is that I should await “word of mouth” (though they coordinate amongst themselves using telecom tools, not word of mouth).  Or, they pull a Mike Pence, shunting me off to connect with the non-kiter girlfriend with whom I have little in common.  I was left wandering down a beach one moonless midnight looking for someone who had invited me but dropped an erroneous GPS pin, and then inhospitably left me to pitch a tent in the parking lot until the light of dawn welcomed me instead.  One guy texted me that he wasn’t going to the lake, while it turned out he had told the gang that he was going (and did go).  I’m the only kiter from the group not invited to a snowkiting networking social night in town, despite two kiters asking the host to invite me (and despite my having more snowkiting experience than most of them).  One of the non-kiting girlfriends is the only woman at that gathering; she wonders why I’m not there but stays silent, despite knowing full well how much I long to be included.  So, not only is it #notallmen, but it is #somewomen who can’t relate their abstract civil rights platforms to the concrete situations right in front of them where they could make a difference.  Overcoming the bystander impulse is hard, even for those who rail against those who don’t overcome the bystander impulse.

Obviously, kiteboarding sub-cultures have no reason to be immune to the biases and bad behaviors of wider society.  But, running into that social imperfection in an otherwise idyllic context surprises me every time.  For me, kiteboarding is an incomparable peak experience where the boundaries of Self dissolve, the chaotic brutality of wind and waves elevates nature to the sublime, and through the sacrament of this sport, I glimpse the infinite.  It’s totally worth it. 

Posted in <5 min read, Kiteboarding, Personal, Social issues, [All posts]

Ezekiel Bread and Reading Comprehension

4 min read

Food for Life Baking Company makes sprouted whole-grain “Ezekiel 4:9 Bread”.  The southern California-based company characterizes its product as “crafted in the likeness of the Holy Scripture verse Ezekiel 4:9 to ensure unrivaled honest nutrition and pure, delicious flavors”.

Who is Ezekiel?

Ezekiel was an ecstatic prophet of doom in 6th-century BCE Judah.  Canonicity of the scripture attributed to him was controversial among Second Temple Jews; but, it was ultimately included in the Hebrew Bible.  Consequently, Ezekiel is considered a major prophet in Judaism, Christianity, and Islam.

The Book of Ezekiel is filled with fanciful, proto-apocalyptic imagery.  In Jewish tradition, the opening chapter’s strange, psychedelic vision is so “dangerous” that it should only be read by (male) adults.  Much of the famously bizarre New Testament Book of Revelation borrows directly from the Book of Ezekiel (along with borrowings from the Books of Isaiah, Daniel, and Psalms, and several extra-canonical Jewish apocalyptic scriptures).

What does Ezekiel 4:9-13 actually say?

Ezekiel is four chapters into describing an extended religious vision:  The angry Israelite god, Yahveh, tells Ezekiel to make bread from a combination of then-common cereal grains and legumes, to be baked over human feces.  Yahveh explains that Ezekiel’s suffering from eating this offensive bread constitutes a divine sign to all the Israelite people – symbolizing their imminent, horrific divine punishment for incomplete devotion to Yahveh.  Yahveh says his wayward people will experience the demolition of Jerusalem, the violent death of 2/3 of the population, foreign exile, and having to eat ritually impure food while in exile.  In subsequent chapters, Ezekiel goes on to preach to the Israelites that they must accept impending annihilation by the Babylonians as just punishment for their transgressions against Yahveh.

The bread recipe in Ezekiel 4:9 is explicitly for punishment, not nourishment.

What should Biblically-accurate “Ezekiel bread” contain?

  1. hitta = durum wheat
    • Food for Life’s Ezekiel 4:9 Bread substitutes modern common wheat
  2. seorah = barley
  3. pol = fava beans (aka broad beans)
    • Food for Life’s Ezekiel 4:9 Bread substitutes soybeans, which weren’t grown in the Near East / Middle East until a few centuries after The Book of Ezekiel was written
  4. adasa = lentils
  5. dohan = probably millet (the Hebrew word is a hapax legomenon – i.e., it occurs only once in known writings – so its meaning had to be inferred by scholars from an Akkadian cognate)
  6. kussemet = emmer wheat (aka true farro)
    • Food for Life’s Ezekiel 4:9 Bread substitutes spelt, which is a softer-hulled hybrid of emmer

The Hebrew text says nothing about sprouting the grains, though pre-soaking has always been a technique for making edible food from unmilled grain.  Nor does the text say whether to keep the grain whole or mill any of the bread ingredients.

The Hebrew text also says nothing about yeast or salt, which are ingredients in Food for Life’s Ezekiel 4:9 Bread. 

Health claims

Ezekiel bread is certainly healthy – but not because it follows a (misinterpreted) 2500-year-old Bible verse. 

First, commercial Ezekiel bread doesn’t contain any additives or preservatives — which is why it’s sold in the refrigerated section of grocery stores.  It is widely understood that this is likely healthier — though impractical for mass distribution and feeding the planet.

Second, Ezekiel bread uses some “ancient grains” that were long ago displaced by much higher-yielding hybrid grain species that tolerate a wide range of climates and require less processing effort.  That critical crop innovation (combined with irrigation) made possible the past few thousand years of rapid human civilization growth.  Today, wealthy Westerners are re-discovering these grains because they’re tasty and also have a higher ratio of protein to gluten.  But, a bread recipe using high carbon-footprint cereal grains and legumes (and baked using smog-creating excrement fuel) cannot feed the world – and neither reflects lost natural wisdom nor evidences divine knowledge about human nutrition.  (Note that Hebrew biographers of the omniscient Yahveh didn’t record him mentioning the gluten-free “superfood” grains amaranth and teff, which were thriving crops in the Americas and Africa during Biblical times.)

For all of human history, coarse foods were for poor people and refined foods were for rich people.  However, humans have recently realized that whole grains are more nutritious than refined grains (bran and germ removed).  In a relatively abrupt reversal of millennia of food economics, whole-grain bread is now more expensive than refined-flour bread.  So, if Ezekiel 4:9 implies use of whole grain flour (it doesn’t) or unmilled whole grains (it doesn’t), that would have been meant to convey low quality – not purity.  Accuracy in Biblical literalism requires reading the text through the lens of the time period in which it was written.

(Note: Anachronistic reading of the New Testament Gospel of Mark and Gospel of Matthew leads to similar interpretive confusion:  Just before being crucified, Jesus declines wine (oinos) laced with bitters.  That sounds awful…unless you know that, at that time, wine adulterated with bitter substances was used to dull pain.  Is the story writer’s point that Jesus declines the drink because it’s unappetizing (i.e., offered to him out of cruelty), or because its anesthetic properties would fog his experience of suffering (i.e., offered to him out of sympathy)?  We have no way of knowing.  Later, Jesus is offered “sour wine” (oxos) in a contextually-clear gesture of sympathy.  This is also confusing for modern readers unless you know that, at that time, sour wine was a common beverage perceived as refreshing.)

Background on wheat species

  • Einkorn wheat (aka “farro piccolo”)

Triticum monococcum

Hulled diploid wheat

Domesticated ~8000 BCE

  • Emmer wheat (aka “farro medio” or “true farro”)

Triticum turgidum dicoccum

Hulled tetraploid wheat; natural hybrid of two wild grasses

Domesticated ~8000 BCE

  • Durum wheat

Triticum turgidum durum

Naked (no hull) tetraploid wheat

Developed by human artificial selection from emmer wheat ~7000 BCE

5% of today’s global wheat crop

  • Spelt wheat (aka “farro grande”)

Triticum aestivum spelta

Soft-hulled hexaploid wheat; natural hybrid of emmer and another unknown grass

Domesticated ~5000 BCE

  • Common wheat

Triticum aestivum aestivum

Naked hexaploid wheat; hybrid of durum and spelt wheat

Developed ~1000 BCE; cultivars of this species then bred for increasingly higher gluten content

95% of today’s global wheat crop

Posted in <5 min read, Food, Religion, [All posts]

Decision analysis of friending versus dating

3 min read

Online matching platforms are best conceptualized as friend-finders rather than date-finders.  Given how difficult it is to find compatible people in the world, it is irrational to rule out the possibility of friendship (or activity buddy, or professional connection) being the optimal path with a cool person you meet online.  

Let’s say that, for a particular guy I’ve just met and like, my assessments of the future are:

  • Probability of a long-term romantic relationship success = 33%
  • Probability of remaining friends after a short-term dating fling = 25%
  • Value to me of a friendship with him = 80 (on a scale of 1-100)
  • Value to me of short-term dating with no ongoing friendship = 30
  • Value to me of a long-term romantic relationship/partnership with him = 100

Let’s visualize my assumed probabilities and values of potential outcomes as a decision tree:

[Image: decision tree of outcomes]

If I ignore risk (i.e., probability of success), I might jump to the conclusion that dating is a “better” strategy than friendship.  So, let’s calculate the expected values (i.e., probability % * outcome value) of my two strategy alternatives:

[Image: decision tree of expected values]

Expected value of dating is calculated as 36.  Expected value of friendship was given by me as 80.

Therefore, if my preference is to maximize expected value, I should befriend this guy instead of dating him.  This would be the risk-neutral, probability-weighted, “rational” strategy.  

However, if I prefer a strategy of maximizing potential value, I should date him.  I have a 50% chance of extracting more value from the connection by dating him than by friending him.  I could end up with a high value of 110 (16.8% chance) or 100 (33% chance)… or a low value of only 30 (50.3% chance).  This is a risk-seeking, high-beta gamble for me.

Risk tolerance and decision optimization approach (i.e., strategy preference) is specific to each person in each circumstance.  

Now let’s say the guy in question has done his own personal decision analysis as well (by filling out the interactive calculator below!), and he decides his preferred strategy is dating me.  He may value friendship less than I do — perhaps because he has lived in this city for a long time and already has plenty of friends.  Or, he may think I’m the best thing since sliced bread; so, he sees immensely more value in a romantic partnership compared to a friendship, and he’s perhaps also pretty confident that it would work out between us.  

He can’t do much about my decision criterion of preferring to maximize expected value.  But, he could still convince me to date him if he provides information that updates my initial assessments of the future.  For example, as I spend time with him, I might estimate a higher likelihood of long-term romantic relationship success.  As I meet his ex-girlfriends with whom he is still friends, I might perceive an increased probability of maintaining a friendship after dating him (thus limiting the downside risk of getting romantically involved).  As I realize he’s an upstanding feminist gentleman, I might update my valuation of a short-term fling.  [See my essay “Game theory of hookups” for detail on how critical a man’s demonstrated integrity is for increasing his appeal to women for short-term intimacy.]  Most importantly, as I get to know him and experience his appealing qualities, I might see him in a different light and come to value a long-term romantic outcome with him more than I do initially.  In other words, the more he starts to seem worth the risk and/or represents a lower risk, the more likely I’d want to take the risk.  And, of course, this all applies equally to both parties considering the friendship-versus-dating question.  

Input your own assumptions in this interactive calculator!


Input your assumptions:

  • Probability of long-term romantic relationship = __33__ %
  • Probability of friendship after short-term dating = __25__ %
  • Value of friendship = __80__ [1-100 points]
  • Value of short-term dating = __30__ [1-100 points]
  • Value of long-term romantic relationship = __100__ [1-100 points]

Calculated results and recommendations:

  • Incremental value of long-term over short-term romantic connection = __70__
  • Expected value of dating = __36__
  • Maximum potential value among all theoretical outcomes = __110__
  • If you prefer a strategy of maximizing expected value, then choose __friendship__
  • If you prefer a strategy of maximizing potential value, then choose __dating__
  • Probability that potential-value-maximizing strategy yields better outcome than expected-value-maximizing strategy = __50__ %
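The tree above can be reproduced in a short Python sketch.  The leaf probabilities come out to 16.75% for the 110 outcome (the post rounds to 16.8%), 33% for 100, 50.25% for 30, and a 49.75% (roughly 50%) chance that the dating gamble beats the sure friendship value of 80.  The convention behind the calculator's "36" expected-value figure is my own inference (it matches weighting values incremental to the short-term baseline), so treat that line as an assumption.

```python
# Inputs as stated in the post
p_ltr    = 0.33   # probability of long-term romantic relationship success
p_friend = 0.25   # probability of remaining friends after a short-term fling
v_friend = 80     # value of a friendship
v_short  = 30     # value of short-term dating with no ongoing friendship
v_ltr    = 100    # value of a long-term romantic relationship

# Dating-branch leaves; each value is the total accrued on that path
outcomes = {
    v_ltr:              p_ltr,                         # 100: romance works out
    v_short + v_friend: (1 - p_ltr) * p_friend,        # 110: fling, then friends
    v_short:            (1 - p_ltr) * (1 - p_friend),  #  30: fling only
}

# Straight probability-weighted value of the dating branch
ev_dating_full = sum(v * p for v, p in outcomes.items())

# The calculator's EV-of-dating figure appears to weight values
# *incremental* to the short-term baseline (my inference, not the post's
# stated formula); this comes out to ~36
incremental    = v_ltr - v_short                       # 70
ev_dating_incr = p_ltr * incremental + (1 - p_ltr) * p_friend * v_friend

# Chance that the dating gamble beats the sure friendship value of 80
p_dating_wins = sum(p for v, p in outcomes.items() if v > v_friend)
```

Under the plain probability-weighted convention the dating branch is worth 66.5; under either convention, the expected-value maximizer still chooses friendship (80).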


Posted in <5 min read, Decision quality, Interactive calculators, Sex and relationships, [All posts]

Suffering and its discontents: Reflections on the Bronze Age Collapse

8 min read

Once upon a time, there was a super-regional trade network of specialized economies that supported widespread prosperity and prolific innovation and creative output.  Then the climate shifted abruptly to cause drought and plummeting agricultural yields, stateless marauders suddenly appeared on the scene, and a sequence of earthquakes crumbled cities along a major fault line.  In less than one century, trade routes were severed, wealthy cities burned to the ground and were permanently abandoned, powerful governments dissolved, and refugee populations wandered the region. 

That was the Bronze Age Collapse of ~1225 to ~1125 BCE.  Civilization around the Mediterranean and Near East fell abruptly into a Dark Age of scarcity and bloodshed, from which it took 200 to 500 years to recover, depending on the particular region.

Societies are known to have survived acute droughts and famines in the past.  Societies have survived foreign invasions.  Societies have survived devastating earthquakes. 

But not all at once.  

All together you get one of the largest-scale social system collapses in the history of humanity.  Scholars suggest it was at least as dramatic as the fall of the Roman Empire and subsequent European Dark Ages. 

On an individual scale, we can also note that people have been known to survive losing a loved one.  People have survived being financially wiped out.  People have survived losing a home, or being displaced from their homeland. People have survived the cruelty of being excommunicated from their family, or the isolation of being suddenly ostracized from a social circle.  People have survived job loss, being shut out of their career, and thorny legal entanglements.

But not all at once.

All together you get the hyperbolic trials of Job – an evisceration of life so improbably comprehensive that it only exists in Biblical folklore to make a dramatically unsubtle point about the inscrutability of innocent suffering.

A recap of the Hebrew Bible’s Book of Job, written ~400-~350 BCE in Jerusalem:

  1. A man named Job loses everything at once: his children, his livestock and human slaves, his health.
  2. Smug “friends” of Job flail around for an explanation. They themselves have never suffered as comprehensively and profoundly as Job. They self-servingly say it must all be morally deserved suffering.  But the text has gone to great pains to impress upon the reader that poor Job is impeccably righteous and undeserving of any punishment whatsoever.
  3. Job is left bewildered by the false accusations of the self-satisfied bystanders. Since there’s no afterlife (the Hebrews didn’t believe in one), he doesn’t even get some lame platitude about posthumous scale-balancing in which to take meager comfort.  (Hellenistic mystery cults will shortly invent the theological platitude of individual salvation and afterlife.  A sect of apocalyptic Judaism will then re-brand the idea as “Christianity”.)
  4. God never answers the question of why Job suffers. He unsatisfyingly tells Job (in a later interpolation appended to redeem the otherwise atheist-sounding text) to shut up and stop complaining or wondering about suffering.

What’s obvious to the contemporary reader is that it’s all random.  There is no deity meting out good and bad fortune.  Fortune is fortune.  That’s why we call it fortune.  If life circumstances are coin flips, and the 4th century BCE Judean village is a handful of coins, then some poor villager will end up with a “suspicious” run of 100 tails.  A pre-scientific village practicing immature ancient theism will understandably flail around for supernatural explanations of something that any statistician knows was expected.  The gaps of human understanding in which their god(s) abide will shrink over the next two millennia, until science obviates the psychological drive to posit a creator.  We now have a causal model of the world that accounts for all of the data — including the extreme outliers like Job, whom religion never succeeded in satisfactorily explaining.
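The coin-flip argument can be made concrete with a quick simulation (my own illustration, not from the post): any particular streak of tails is exponentially unlikely, yet across a whole village of flippers the unluckiest resident's streak is statistically guaranteed to look "suspicious", since the longest run of tails in n flips grows like log2(n).

```python
import random

def longest_tails_run(flips):
    """Length of the longest consecutive run of tails in a flip sequence."""
    best = run = 0
    for is_heads in flips:
        run = 0 if is_heads else run + 1
        best = max(best, run)
    return best

random.seed(0)                       # reproducible "village"
n_villagers, n_flips = 1000, 1000

# Each villager's life is a sequence of fair, independent coin flips
runs = [longest_tails_run(random.random() < 0.5 for _ in range(n_flips))
        for _ in range(n_villagers)]

# A typical villager's worst streak is around log2(1000) ~ 10 tails;
# the unluckiest villager's streak is reliably far longer
typical = sorted(runs)[len(runs) // 2]
worst = max(runs)
```

With 1000 villagers each flipping 1000 coins, the median villager's worst streak is around 9 or 10 tails, while the unluckiest villager's streak typically exceeds 15, with no supernatural explanation required.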

Below is a list of 25 types of life tragedy.  How many have hit you?  Simultaneously?  Just one can cause depression.  Most any two officially constitute childhood adversity.  Three or four is plenty for a large-font memoir and motivational speaker gig.  

In the space of 1 ½ years, I experienced nineteen.  Most were clustered in a period of four months.

Simultaneous Life Tragedies Checklist

[X] Death or disappearance of spouse
[X] Unwanted legal divorce proceeding
[X] Loss of sibling
[X] Legal bankruptcy proceeding
[X] Loss of parent(s)
[X] Criminal or civil legal proceedings
[O] Loss of child
[O] Enslavement or false conviction and incarceration
[O] Loss of pet
[X] Acute cash flow strain
[X] Homelessness
[X] Major unrecoverable property loss
[X] Forced geographic displacement
[X] Complete financial wipe-out ($0 savings + $0 retirement + $0 income + $0 assets + $0 credit capacity + high nondischargeable liabilities)
[X] Victim of violence or other crime
[X] Loss of personal safety and security
[X] Emotional trauma / diagnosed PTSD
[X] Loss of social circle
[O] Serious physical health problem onset or terminal diagnosis
[X] Unexpected job loss
[O] Mental illness onset
[X] Permanent loss of career
[O] Catastrophic injury or permanent disability
[X] Loss of professional network to build new career
[X] Loss of ability to ever have children

[Notice that “relationship breakup” doesn’t even make the list.  And “acute cash flow strain” is only on there so that people with mild problems won’t check “complete financial wipe-out”.  Getting dumped by a boy/girlfriend and having trouble paying off your credit card balance are an order of magnitude less traumatic than losing a spouse or being completely wiped out.  Failing to see that is a Type I fallacy (see the typology below).]

There are certainly worse things that could have happened to me, which aren’t checked off above.  My floor isn’t by any means the bottom of the pit of human suffering.  The onslaught wasn’t accompanied by a catastrophic accident that permanently disabled me.  Unlike my ex-husband, I don’t have a mental illness.  Though my physical health was adversely affected by the turmoil, I wasn’t diagnosed with an incapacitating or terminal illness.  In the aftermath, I lost the power of free choice in unspeakably soul-crushing ways; but, I wasn’t falsely convicted and incarcerated for a crime.  And, I am not dead. 

Most of all, I saved my beloved puppy.  That has been everything — the fulcrum of restoration, my orienting purpose and incentive to persevere.  Six years on, the social isolation continues, due to my far-from-recovered economic circumstances.  And, as my now-aging dog’s health fails, the remnant heartlessly pulls away.

You who callously and inaccurately relativize others’ suffering by saying that “everyone has problems”… you who flippantly dismiss pleas for help and blame victims in order to maintain the fragile plausibility of your personal narrative of meritocracy…you who pay lip service to lofty liberal activism, but refuse help to a friend facing existential risk at home…you whose preoccupation with one or two personal setbacks displaces your capacity for empathy… You are the “friends” of Job.  

Most people — because they have checked “only” two or three boxes at once and were leveled by it — cling to a worldview that life is supposed to be fair and pleasant.  Theistic language or not, that is what they believe and express.  Their words reflect faith in mean reversion and a vague expectation that overwhelming suffering resolves eventually in compensatory hidden benefits and fairness.

People have four options for response to a friend’s suffering.  (See table below.)  It is a rare person who can acknowledge the non-relativism of unjust suffering around them, and can accept that it just is….and then can join me in finding contentment in such a world (Type IV).  I have learned to avoid people who refuse to acknowledge my experience (Type I), who reductively blame the victim (Type II), or whose price of acknowledging my experience is their unwelcome, projected darkness and stultifying pity (Type III).  Die Sonne scheint noch (the sun still shines).

Typology of Responses to Suffering


The Bronze Age Collapse metaphor extends as follows:  In the power vacuum that was created by armageddon in the early 12th century BCE Mediterranean and Near East, new civilizations took root.  People could no longer make bronze, because copper and tin deposits in the Near East are separated by 2,000 miles.  Such a concentrated, exposed supply chain broke as soon as anarchy cut off the trade route.  So, people turned to harder-to-smelt but readily-available, single-ingredient iron…and then carbon-tainted iron (a.k.a. steel) — which enabled lighter, sharper, stronger objects that greatly extended human power.  Though causality is speculative, we soon got phonetic alphabets, re-imaginations of a transcendent Ultimate, and democracy.  A long half-millennium after a precipitous one-century collapse, global social development measures recovered.

Of course, we can’t know what would have happened without that tablet-wiping catastrophe.  (This goes also for the late 5thc CE fall of the Roman Empire, and the 14thc CE halving of the European population due to plague.)  The Bronze Age Collapse made warfare ubiquitous, which advantaged the physiologically stronger gender, which invited loss of women’s social status, which we see in archaeological evidence of societies around this time ceasing to feed women meat and no longer burying them with the nifty artifacts that accompany male corpses.  It turns out that women haven’t always been so maligned and oppressed, and gender equality changes haven’t been monotonically positive over the 100,000 years of homo sapiens existence.  It seems that the Bronze Age Collapse may have not only set social development back several centuries, but also disproportionately knocked women back in a way that takes much longer to recover from.  Social status collapse is stickier than economic collapse.

Nonetheless, today’s liberal historicism makes a temporally- and culturally-biased argument that the miserable and bloody 12thc BCE Bronze Age Collapse was, centuries later, ultimately “beneficial” in the imagined “arc” of deterministic history.  And so the theory is that something “better” can also eventually, after some number of years, come out of the 21stc CE evisceration of my own life.

Posted in 5-10 min read, Personal, Religion, Social issues, [All posts]

Nature, nurture, and disease transmission

4 min read

Bipolar disorder afflicts about 3% of people.  Half of them have a bipolar parent.

Therefore, what is the chance that a bipolar parent will produce a bipolar child?

The pendulum of belief about the causes of mental illness has swung from nurture to nature and back to the middle: a non-dualistic view that it’s both nurture and nature.  People used to describe mental illness as a character weakness or choice.  Then, we realized mental illness is highly heritable.  Now, psychiatric science is focused on how environmental factors activate genes.

In the case of schizophrenia, researchers have determined that childhood environmental stress increases the likelihood of manifesting mental illness more than having the gene does.  Childhood environmental stress includes things like abuse, neglect, brain damage, exposure to toxins, cannabis use, bullying, exposure to violence, or death of a parent.  Even having immigrant parents and living in a city are evidently associated with higher incidence and earlier onset timing (though specific causal relationship is unclear).  Genetics is not simple destiny.  Genes have to get turned on. 

That means that backward-looking data about heritability isn’t perfectly predictive of the future in individual cases.  Knowing about his own diagnosis and the mechanism of triggering gene expression, a bipolar parent can put concerted effort into preventing childhood adversity, and thus lowering the risk of “passing it on”.   Over time, the prevalence of mental illness among children of the mentally ill (C) could trend downward in the direction of overall population prevalence (A). 

Psychiatrists considering a bipolar diagnosis always ask if the patient’s parents (or other relatives) have bipolar disorder.  Inviting a patient to reflect on their childhood through that lens can generate an epiphany; it’s often observed that perhaps half of bipolar patients have a bipolar parent.  But…arithmetically, that does not translate to half of bipolar parents having a bipolar child!  You can calculate the actual arithmetic result and play with input assumptions using this calculator. (Note:  “Bipolar parents” means either 1 or 2 bipolar people. This doesn’t distinguish the evidently higher risk from having 2 bipolar parents.)


A: __3__ % of individuals are diagnosed with bipolar disorder (30/1000)

B: __50_ % of diagnosed bipolar individuals have one or two bipolar parents (15/30)

[Interactive calculator forthcoming, where you can change assumptions]

Calculated result:

C: __25_ % probability that a bipolar person’s child will have bipolar disorder (15/59)

D: __2__% probability that a non-bipolar couple’s child will have bipolar disorder (15/941). Note that D < A, because a genetic linkage means that prevalence across all individuals (A) is higher than prevalence only among offspring of non-bipolar parents.

E: _8.5__x higher likelihood that a bipolar person’s child will have bipolar disorder (C/A), compared to if the disease were randomly distributed with no genetic link

                      Child bipolar   Child not bipolar   Total
Parents bipolar                  15                  44      59
Parents not bipolar              15                 926     941
Total                            30                 970    1000
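For the arithmetically curious, the calculation can be reproduced in a few lines of Python.  This is a minimal sketch (the function name is mine): it assumes the 3% prevalence applies independently to each parent, which is what yields the ~59-per-1000 denominator for C.

```python
def bipolar_risk(A=0.03, B=0.50, population=1000):
    """Return (C, D, E) as defined in the post, given prevalence A and
    the share B of bipolar individuals who have a bipolar parent."""
    bipolar_children = A * population                     # 30 bipolar children
    via_bipolar_parents = B * bipolar_children            # 15 have bipolar parents
    # Assumed: each parent is independently bipolar with probability A,
    # so the share of children with at least one bipolar parent is 1-(1-A)^2.
    children_of_bipolar = (1 - (1 - A) ** 2) * population        # ~59
    children_of_nonbipolar = population - children_of_bipolar    # ~941
    C = via_bipolar_parents / children_of_bipolar                # ~25%
    D = (bipolar_children - via_bipolar_parents) / children_of_nonbipolar  # ~1.6%
    E = C / A                                                    # ~8.5x
    return C, D, E

C, D, E = bipolar_risk()
print(f"C = {C:.1%}, D = {D:.1%}, E = {E:.1f}x")
```

Changing A or B shows how sensitive the result is: the headline finding that "half of bipolar people have a bipolar parent" translates into only about a one-in-four risk for the child of a bipolar parent, because there are roughly twice as many children of bipolar parents as there are bipolar children of bipolar parents.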

Personal context:  In 2002, I married a man who was subsequently diagnosed with bipolar disorder.  The diagnosis (and second through seventh opinions to confirm it) was a comfort, as he had “always felt like there was a screw loose”.  He was, understandably, terrified of passing on the disease.  In 2007, his immigrant parents wrote him off as a “shame to the family” (for the diagnosis) and “addicted to drugs” (for taking prescription anti-psychotics).  My husband’s mother wasn’t diagnosed, but he felt she had some type of mood disorder; also, his maternal grandfather, who died back in the old country, was described suggestively as an eccentric who had gambled the family fortune away.

So, we planned to adopt a child… as soon as we recovered from being financially wiped out from keeping my unraveling husband alive and functioning in a pre-ACA medical system.  Even our 401ks had to be liquidated to pay for treatment.  I quit my career to manage our chaotic life.  Then, the whole economy collapsed, adding insult to injury.  My conservative parents wrote me off for having “chosen” to make the proverbial marriage bed I was lying in.  Environmental stress skyrocketed; his symptoms got worse.

One spring day in 2011, my husband went off his meds and vanished.  In the disorienting aftermath, I tried to have my eggs frozen, so that no matter what happened in the uncertain future, I’d at least retain the choice to have a biological child. 

The hospital ethics board deliberated as to whether I should be allowed to undergo the surgical procedure, given the possibility that my missing bipolar husband might return and I might have him fertilize the eggs.  But, they realized that refusing me the procedure would constitute a violation of my rights – not to mention that it smacked offensively of eugenics.  So I got the go-ahead.  Nonetheless, the procedure never happened.

The missing bipolar husband resurfaced in a violent manner.  A trial was set for the same day as the procedure.  Both proved to be strictly immovable due to judicial machinations and ovulation timing.  (It wasn’t until a year later that a mounting list of adverse coincidences made it no longer paranoid to suggest this scheduling overlap hadn’t been a coincidence.)  My main witness was my semi-estranged mother.  She threatened to sabotage the trial if I had her appear without me there in person.  My second witness was an acquaintance who wouldn’t guarantee he’d take time off work to show up.  (Crossing the brown line of interracial marriage meant certain people refused me sympathy or protection, seeing my suffering as a platform to share essentialist beliefs about how “those people” act; long-suppressed bias surfaced as relatives suggested I leave a man who had in fact already left me.)

Thus, I faced a Sophie’s choice between my near-term safety and a hypothetical future child.  Fear drove me to the courtroom instead of the hospital.  By the time the doctor reset my reproductive cycle to reschedule the procedure 4 months later, my late-30s ovaries had abruptly and irrevocably slipped off the cliff of fertility.  And, as of this writing 6 years later in 2017, the long shadow of my beloved husband’s mysterious disappearance and violent return are such that I still don’t have a stable job or a new husband, and thus am not qualified to adopt, either.

Today, people callously tell me to be “glad” I don’t have a child — glad because the child could have been bipolar if my husband had come back, and glad because when he didn’t come back I might have opted for the tough road of single motherhood if I had had the eggs.  But choice is a transformative thing that shouldn’t be so flippantly discounted.  Choice makes the difference between sex and rape, between car-camping and homelessness, between sharecropping and slavery, between euthanasia and homicide, and between a couple electing to be child-free and…. a tragedy for which our culture doesn’t even have a word:  a woman being denied the possibility of motherhood due to the actions of others.

Posted in <5 min read, Decision quality, Personal, Sex and relationships, Social issues, [All posts]

Legal originalism

19 min read

Canonization of written law creates three problems over time:  social realities and ethical norms naturally evolve to be incongruous with a static legal code, new behaviors and technologies arise that weren’t contemplated by the original authors, and internal inconsistencies in the text come to light.

The Hebrew Torah (“law”) exemplifies these problems: 

  1. Evolution of norms. The Levitical age of 25 years (Numbers 8:24) was later reduced to 20 years (1 Chronicles 23:24), in order to yield more priests within a decimated post-exilic population.
  2. Unanticipated situations. The Torah provides detailed instructions for designing a mobile tabernacle (Exodus 25-27), but not for building and maintaining a fixed central temple.  In the post-exilic period, Jerusalem temple worship and priestly activities became the centerpoint of Judahite identity, requiring updated instructions (1 Chronicles 21-29 and 2 Chronicles 2-5).
  3. Internal inconsistency. Is Passover meat to be eaten boiled (Deuteronomy 16:7), or only roasted and never boiled (Exodus 12:9)?  The proper technique was later harmonized as roasting (2 Chronicles 35:13).

The five long-lost papyrus scrolls eventually canonized as the Torah were written in the 8th to 6th centuries BCE.  First, the core of Deuteronomy was likely written in mid-8thc BCE Samaria (northern Canaan) and revised in late 7thc BCE Jerusalem (southern Canaan).  In the late 8thc refugee-flooded Assyrian vassal state of Judah, the Book of Genesis strategically combined northern and southern oral traditions to ensure broad resonance.  Next, Leviticus, most of Numbers, and the “priestly” half of Exodus were written in 6thc BCE Babylonian exile. 

Reality in the 4th century Persian vassal state of Judah was quite different.  It took Jerusalem four centuries to rebound from its devastation by the Babylonians in 587 BCE, which had reduced the prosperous capital city of some 20,000 inhabitants to perhaps no more than 1,000.  The 6th century territory of Judah contracted from some 3,000 square miles to around 100, and didn’t resume meaningful territorial expansion until the 2nd century BCE Hasmonean period.  Solidifying Judahite ethno-religious identity in this bleak, post-exilic context was the primary purpose of new scripture.

So, rather than suffice with a legal code from two to four centuries past, priests wrote new guidelines.  The 4th-century BCE Book of Chronicles repeatedly justifies re-interpretation, harmonization and amendment of the sacred Torah with the phrase k’mishpat (“according to tradition”).  The semantic range of the Biblical Hebrew word mishpat includes written laws/regulations/ordinances as well as community interpretation of such rules.  Its meaning encompasses both what is statutory and also what is customary. [1]  In contrast to today’s ontology, law and tradition were not dichotomous. 

Legal hermeneutics

Judicial originalists consider the meaning of a legal document’s original words to be fixed.  They aim to interpret law according to how the words were understood at the time they were written, and without regard to the law’s current consequences.  Within the originalist camp, people differ as to whether to consider the writers’ intent (intentionalists/interpretivists/constructionists) or not (textualists/formalists/strict constructionists).  Intentionalists may use supplementary texts to attempt to ascertain intent or they may ascribe intent subjectively.  Textualists believe adequate meaning is extractable from the words on the page. 

Judicial pragmatists (non-originalists/purposivists/loose constructionists) consider precedent and consequences.  They understand the legal document’s writers’ intention to have been varied, contextual, and impure.  People can’t anticipate everything in the future, so the legal code should be viewed as “living”.  Interpretive pragmatism recognizes that there was debate before the ink was dry, and so it’s artificial to long afterward retroject stasis onto something that wasn’t static at its origin, to indulge in nostalgia for a certainty that never existed.  Wishing you had certainty doesn’t give you the right to pretend you have it.

Certain types of legal texts naturally demand an interpretive approach weighted more toward either empathic pragmatism (consideration of the “forest”) or disciplined textualism (focus on the “trees”).  A reader may subscribe to elements of all approaches, depending on text and context.  Collective legal text interpretation by a group is best served by a diversity of individual interpretive approaches. 

Pragmatism, intentionalism, and textualism are unequally-spaced points on a spectrum of faith in interpretive objectivity.  Pragmatists accept that objectivity, though desirable, is illusory.  The 21stc revolution in cognitive science and behavioral economics supports this amply.  Originalists still believe that it’s possible (either with consideration of probable authorial intent, or without, in the more extreme case of textualists).  Post-modernism overturned the Enlightenment’s subject-object dichotomy, recognizing that interpretation changes a text.  We now have a broad, cross-disciplinary understanding that “truth” is shifty, the interpretive lens distorts, pure objectivity is unattainable, and reading a text is an act of imposition more than extraction of meaning.

No serious person occupies either extremity of this spectrum.  Nobody claims such nihilistically complete subjectivity that leaves nothing knowable and everyone’s law books tossed out the window.  And, nobody claims such childishly pure objectivity that they believe to have achieved transtemporal telepathy with deceased authors and thus immunity from socio-political-cultural bias.  However, simplistic slander by each side accuses the other of inhabiting the absurd spectral endpoint.

Jewish non-originalism

Judaism is philosophically non-originalist.  Classical four-part Jewish hermeneutics asks readers to consider much more than the historical-grammatical meaning of a scripture text.  Importance is placed on a text’s unintended allegorical possibilities, tangential parallels with other texts, and mystical associations supernaturally revealed to (i.e., invented by) the reader.  Ascertaining the human author’s original intent isn’t the goal.  Sacred time is circular.  Thus, a reader might legitimately use a later text to inform the meaning of an earlier one, despite authorial dependence going the other direction.  Turn-of-the-millennium Jewish scribes imaginatively repurposed old scriptures as midrash, unhistorically embellishing and expanding stories to make a theological point independent of the original text’s plain meaning.  That gave us, for example, The Book of Jubilees, The Book of Enoch, The Gospel of Matthew, and The Book of Revelation.

We are grateful that the writers of the Hebrew Bible chose to include opposing views, preserve some of the authentic diversity of belief that characterized all periods of Canaanite history, and refrain from overwriting past scriptures when they produced new ones.  We inherit an astonishingly rich text, replete with contradictions that evidence accretive composition over time by different schools of thought. The canonical Hebrew Bible contains strata of 2ndc BCE Hasmonean propaganda, 3rdc Hellenism, 4thc and 5thc Persian Zoroastrian influence, 6thc exilic nationalism, 7thc poly- versus heno-theistic tensions, 8thc pan-Israelite ideology, 9th and 10thc Canaanite oral traditions, and distant memories of earlier Bronze Age experience.  Authority in those times didn’t come from a text being static.  The Hebrew Bible’s redactors believed that precisely because it is sacred, religious law must be updated and adapted to current circumstances to remain relevant. [2] 

There are 233,000 Hebrew words in the Tetrateuch (Genesis, Exodus, Leviticus, Numbers) and Deuteronomistic History (Deuteronomy, Joshua, Judges, Samuel, Kings).  The rest of the Nevi’im (“prophets”) plus the Ketuvim (“writings”) total 395,000 words.  Thus, the Hebrew Bible contains 628,000 words.  The Christian New Testament – most of which is midrash on the Hebrew Bible – contributes 138,000 Greek words of content. [3]  Then, the 3rd – 5th century CE Talmud adds 1.8 million more words of interpretation and analysis.  

Strict originalism with the Torah would have left the Jews endlessly carrying the Ark of the Covenant around in a tented tabernacle, instead of building (and re-building) a centralized temple.  The pragmatic re-interpretation of legal text supported the 7th-century BCE monotheizing centralization of the Yahveh cult in Jerusalem and the 5th-4thc BCE institutionalization of priestly temple worship.  Contemporary Western social and political order is dependent on that legacy.  Without legal text revision and re-interpretation, our “10 commandments” would refer to the 8thc BCE “Ritual Decalogue” of Exodus 34 (firstborn animal sacrifice, wheat harvest festival, no covenants with polytheists, etc), rather than the 7th and 6thc “Ethical Decalogue” of Deuteronomy 5 and Exodus 20 (no murder, no adultery, no lying, etc).

Originalism regarding Biblical law today would be absurd.  We don’t kill every child who swears at its parent.  We don’t morally condemn athletes wearing blended-fiber clothing or military veterans with tattoos.  We are right to now punish rapists instead of rape victims, and to abhor the Old- and New Testament-sanctioned institution of human slavery.  We are right to now embrace the innate homosexual orientation of ~6% of our fellow human beings.  Biblical originalism would leave us without the holidays of Hanukkah, Easter, and Christmas.  Christians wouldn’t have the Trinity, the Immaculate Conception, or the penal substitution understanding of atonement.  A majority of contemporary Christian popular songs would be recognized as heretical, in that their theology is inconsistent with the Bible text and derives instead from post-Biblical tradition. 

Conservative Protestant Christianity’s struggle with non-originalism

No matter our individual religious affiliations or lack thereof, we Americans all live under a distinctly Christian “sacred canopy” (sociologist Peter Berger’s 1967 term).  Every society’s sacred canopy is both universal and invisible.  Growing up in America means unconsciously absorbing the Christian cultural and epistemological paradigm – to an extent only somewhat mediated by one’s nuclear family of origin.  

“Judeo-Christian tradition” is an ideologically-loaded 20th century American idea — loaded originally with the pretense of non-exclusion of Judaism, and more recently with a (historically inaccurate) nativist othering of Islam.  In practice, our American “tradition” is Judaic only in that 1st century Judaism was the direct source for almost all of what became Christian theology (with the rest sourced from Hellenistic philosophy and mystery cults).  Indeed, the hermeneutic of the 1st-2ndc CE Jews who wrote the Christian scriptures was identical to that of the 2ndc BCE – 1stc CE Jews who contemporaneously wrote the Hebrew Ketuvim and apocrypha.  (Moreover, their theology was also so congruent that the classification of some period scriptures as “Christian” versus “Jewish” is debatable.)  However, the subsequent exegetical traditions diverged.  

Exegetical praxis in Judaism has remained explicitly non-originalist over the millennia.  Somewhat similarly, Catholic and Orthodox Christianity emphasizes post-Biblical convention and patristic mediation of “living” scripture.  Mainstream liberal Protestant Christianity embraces a “higher” critical perspective and thus adaptability to scholarly revelation about its sacred text.  However, conservative Protestant Christianity stands alone in unqualified opposition to pragmatic interpretation of scripture.  

Because of their position against Biblical non-originalism, the Protestant literalists have particular trouble with judicial non-originalism.  Protestant Biblical literalists account for just 15% of global Christians.  But it is their disproportionately loud voices that frame popular American political and philosophical discourse…and influence secular legal hermeneutics. 

In our Christian-influenced culture, authority comes from stasis.  Hence, ex-Christian anti-theists delight in mocking the Bible for how profoundly its message changed over time.  They are correct that the Bible is not inerrant.  It’s full of orthographic errors, copyist omissions and rogue scribal glosses, anachronisms, externally-attested historical factual errors, mis-citations of other scripture, words nobody today knows the meaning of, pseudepigraphs, plot holes, unmarked interpolations, and unresolvably conflicting non-original manuscript fragments…as well as substantive internal contradictions where the same event or topic is addressed more than once.  However, regarding the contradictions, the mockers are anachronistically applying their 21st-century interpretive paradigm to 1st-2ndc CE (Christian New Testament) and 8th-2ndc BCE (Hebrew Bible) texts.  Despite their reasoning being critically unsound, plenty of contemporary Christians have de-converted in response to noticing Biblical contradictions.  For the same reason, conservative Christians are hostile to scholarly higher criticism and engage in convoluted harmonizations (“apologetics”) to preserve the illusion of consistency.  And, it’s that same normative reverence for stasis that threatens politicians with ridicule if they modify a policy stance based on new information.  Being labeled inconsistent is a cutting insult in our culture.

Although Christian fundamentalists have difficulty embracing a non-originalist interpretative orientation, they may themselves be theologically non-originalist.  Some avow a docetic Christology: Jesus was a pre-existing divine being who only appeared to be human but wasn’t.  For example, I recently endured a Brazilian Pentecostal’s tirade when, in passing, I referred to Jesus as a “person”.  However, docetists like my Brazilian friend were called “anti-Christs” in the canonical Epistles of John (~100-120 CE), their celebrity proponent Marcion-of-Sinope was excommunicated in 144 CE, and docetism was officially condemned as heretical at the 325 CE Council of Nicaea. [4] 

As a practical matter, Biblical originalism is impeded by the fact that we don’t have the original manuscripts.  Academic specialists use advanced critical methods to hypothetically reconstruct what the original likely said.  But, they don’t all agree which of the extant manuscripts is closest to the lost original for any given corrupted passage.  Moreover, because of the internal contradictions and lack of systematic theology in either the Hebrew Bible or Christian New Testament, there is a wide range of scripturally accurate “original” belief.  Technically, being a Biblical literalist Christian doesn’t require belief that Jesus existed, that he was divine, or that he was a predicted Messiah.

“Original Christianity” faithful to Jesus’ own religion could mean following the 613 stipulations of Mosaic law: male circumcision, eschewing bacon and lobster, interest-free lending, premarital sexual abstinence, no worship during menstruation or a week after ejaculation, no work on Saturdays, no neutering of puppies or cross-breeding of livestock, etc.  Nonetheless, their unwittingly non-originalist reading of the Bible can lead nominally originalist Christians to reject not only antiquated elements of the Levitical holiness code, but the Old Testament in its entirety.  Such Paul-centric, quasi-Gnostic, anti-Semitic “Marcionism” was condemned as heresy in the 2ndc CE when it arose.  Still, modern-day Marcionites exist – and, ironically, are known to selectively quote from the Old Testament to argue from implied authority (with further irony of ascribing certainty and relevance to a contradictory, composite text).

This evidences the core problem with originalism in any domain:  it’s usually about convenience, not consistency.  Originalists are liable to make disingenuous arguments.  For example, the Hebrew Bible is overwhelmingly clear that we are to proactively care for the poor, and the Christian New Testament advocates plainly socialist redistributive policies.  Yet so-called literalists are typically hostile to social welfare programs and critical of, rather than compassionate toward, the poor.  Similarly, the Bible has nothing to say about reproductive rights; yet, so-called literalists invoke the Bible in campaigning to restrict women’s dominion over our own bodies.  Originalists choose which passages to privilege and then must explain away or ignore other inconveniently contradictory passages.  “Anything in the Bible that is not literally true must be an allegory, because the Bible is always literally true” goes the apologist’s unapologetically circular argument. 

Constitutional hermeneutics

Tuesday April 18, 2017, was Neil Gorsuch’s first day on the United States Supreme Court.  The self-described originalist caricatured originalism with his monothematic interjections: “Look at the plain wording.” “Just read the words.”  Other justices, including originalists, countered that it’s not so simple.  Journalists noted Gorsuch’s “shots across the bow” as pre-emptive assertions of rigid originalism to come.  On Day One, his “just the text” catechism was already tiresome. [5] 

Textualism privileges words over their consequences, maintaining that there’s no subjective act of interpretation occurring.  But language is inherently ambiguous and thus interpretation is necessary.  For example:

  • Do “privileges and immunities” refer to citizens’ rights at the local, state, and/or federal level, and/or rights derived from natural law and/or by custom, and are they substantive and/or procedural rights?
  • Are white women and non-white people “persons” deserving “equal treatment”? Apparently not, as it took an amendment two years later to give black men the right to vote, and another amendment fifty years after that to give women the right to vote.  Yet, according to tradition (k’mishpat), we now read “person” as encompassing all human beings.  
  • What did “cruel”, “unreasonable”, “probable”, and “liberty” mean in 18thc colonial American English? The document is full of aspirationally vague words with indeterminate meaning, necessitating a value judgment by later readers.  
  • Do wiretapping and drone surveillance constitute a “search”? Does revenge porn posted online constitute “speech”?  “Dead document” textual originalism can become risible.  And, when uniformly applied, it doesn’t always lead to a politically conservative outcome. 

For these reasons, Constitutional textualists are — like Bible proof-texters – notoriously selective with textual evidence.  Whether its proponents are failing to offset subconscious ideological confirmation bias, or are consciously and disingenuously fitting text to personal ideology, textualism is objectively not the objective approach it purports to be.  Textualist Supreme Court justices’ departures from textualism have been well documented, and we expect the same selective hermeneutical approach from Gorsuch.  After all, the court doesn’t keep a scholar of 18thc American English linguistics on call to inform its deliberations.  [6]

Just as the Bible is not inerrant, so neither is the United States Constitution.  Since the framers knew they couldn’t predict the future, they resisted including specific policies – with the exceptions of income tax policy added via amendment, and alcohol policy added and then subtracted via amendment.  Even so, the document made (and eventually corrected) what we now accept was an egregious moral error regarding the specific policy of slavery.  The Constitution is a short document of 7,591 words (compared to 4,200 in this essay), and in it the framers were smart to mostly remain general and abstract.  However, that well-intentioned textual magnanimity translates into greater subjectivity in interpretation later on.  This parallels how the Bible’s parabolic abstractions and poetically non-specific language (plus errors, contradictions, non-systematic message, and composite structure) force subjectivity in its readers.  By their natures, both texts provoke endless interpretive controversy.

Object informs interpretation.  Though most texts don’t announce what hermeneutic to use in interpreting them, both the Bible and the US Constitution helpfully do so.  The 9th Amendment is famously troubling to constitutional textualists, in that it recommends non-originalism.  It states that failure to specify a right in the Constitution shouldn’t translate into restricting that right.  The document knows it isn’t comprehensive.  In addition to being unfeasible in practice, strict “originalism” is both non-Biblical and non-Constitutional.

Law has purpose.  Interpretation should further the law’s purpose — as distinct from the law-writers’ intent, and as distinct from imagining how the law-writers would have solved a problem of today that they never could have contemplated.  The purpose of the United States Constitution is stated up front: “in order to form a more perfect union, establish justice, insure domestic tranquility, provide for the common defense, promote the general welfare, and secure the blessings of liberty to ourselves and our posterity.”  Fidelity to purpose requires adaptation to match changing social and ethical norms.

Faithfulness to the overall purpose of a document also requires reading it as a unified whole, rather than as isolated passages.  This is considered best practice for legal documents as well as religious scripture.  For example:  

  • The ~700 BCE text of Isaiah 7:14 depicts a court prophet in 734 BCE explaining to King Ahaz of Judah that a young woman having conceived is a comforting divine sign that the Israel-Aram coalition against Judah will fail. If one digests the Book of Isaiah in its entirety, and together with the Deuteronomistic History from which it copies some content, its multi-century historical context and anthology structure are apparent.  However, reading it in decontextualized excerpts, many have claimed that the words predict Jesus.  The interpretive fallacy is aided by the 3rd-century BCE translators’ choice — whether careless, ignorant, or ideological — to (a) change a present perfect Hebrew verb to a future tense Greek verb and (b) replace the specific Hebrew word for “young woman” (almah) with an imprecise Greek word that can mean either “young woman” or “virgin” (parthenos).
  • The 2nd Amendment of 1791 states its purpose as “the security of a free state”, and thus the right to bear arms as conditional. In the overall context of the Constitution, the potential need for a militia to effectuate that security was due to the new republic’s lack of a standing army.  In an era that also lacked local police forces, domestic tranquility got some help from private citizens with muskets that fired 2 times per minute.  Today, domestic tranquility is harmed by easy access to guns firing 600 times per minute.  Weapon technology and social reality have changed in unanticipated ways, demanding updated common sense interpretation of how to achieve the 2nd Amendment’s purpose.  Is this an oversimplification…or is it actually the “plain wording”?  [7]  Honest originalism would restrict the right to bear arms today.

Intentionalism is less extreme than textualism, but leads to its own thorny problems.  In interpreting some legal texts (e.g., contracts and wills), the intent of the writer has significant merit for interpretation.  Less so for the Constitution.  The purpose of our Constitution is not to further the disparate and unstated intentions of 39 upper-class, East coast, Anglo-Saxon, Protestant,  heterosexual, slave-owning, pre-industrial, 18th-century male lawyers. [8]  For example, it is clear from extra-textual evidence that the intention of the 14th Amendment in 1868 was definitely not to prohibit racial segregation; yet, we’ve (thankfully) interpreted it as such ever since 1954.  A Constitution interpreted strictly in light of authorial intent — whether imputed or extra-textually researched — doesn’t necessarily provide the best foundation for a more perfect union, in that it may not remain consistent with the Constitution’s own purpose: the welfare and liberty of contemporary citizens.  The temporal and cultural distance between 1787 and 2017 is vast.  As has often been asserted, the Constitution should be wiser than its writers. 

Humans are human because we can transmit valuable learnings across generations.  This is a feature known only in hominins. [9]  Tool manufacturing started 3.3 million years ago with hominins (interestingly, among primates outside our own genus Homo).  Today, intentional tool-making and its corollary of high-fidelity knowledge transmission isn’t seen in any living creature other than Homo sapiens.  We humans use tools to make other tools, rather than merely appropriating found objects as tools.  Crucially, we then teach what we learn, so our children don’t have to start over knapping stones into axeheads and re-inventing the wheel every generation.  The fundamental fallacy of strict legal originalism is that it repudiates the wondrous capacity of humanity to learn, accrue wisdom, and become more efficient, moral, prosperous, cooperative, and healthy over time.  When it comes to Constitutional law, we can do so much better than to “just read the words”.

April 2017



[1]  One example of its statutory usage: In the Book of Deuteronomy, Moses recapitulates mishpat, which we now refer to as “the ten commandments”. 

[2] The Christian New Testament continues the Hebrew scripture practice of incorporating opposing belief traditions without harmonization.  It teems with theological contradictions and incomplete articulations of core doctrinal points.  A systematic Christian theology only coalesced two centuries after the last Biblical scriptures were written.  Hence, the need for so much textual harmonization today among people who can’t tolerate that contradictory diversity. 

[3]  For example, 70% of the Book of Revelation’s 404 verses are decontextualized Hebrew scripture references; and much of the remaining 30% is borrowed from extracanonical Jewish apocalyptic writings and Babylonian mythology.  Similarly, 20% of the Gospel of Matthew’s 1,071 verses are Hebrew scripture citations and references; and much of the remaining 80% are scriptural plot parallels, plus ideas from Greek Cynicism philosophy.  More broadly, none of the Old Testament passages that Christians now read as predictions of Jesus actually refer to a future Messiah at all.  Rather than being ignorant or deceptive, the writers of the Gospels and the Book of Revelation were engaged in the then-common practice of pesher – a sub-type of midrash that divorces old scriptures from their original intended meaning and creatively repurposes them to explicate current events.  However, in short order, rapidly-expanding early Christianity forgot its own original hermeneutic.  For the next 17 centuries, the wider Gentile world mistakenly read Christian scripture as historical rather than midrashic.

[4] The Christologies of Jesus’ posthumous followers included docetism, separationism, incarnation, and adoptionism, as well as the belief that Jesus wasn’t divine.  All but incarnation were eventually declared heretical.  The canonical synoptic gospels reflect resurrectionist adoptionist and baptismal adoptionist Christology – another example of intentional preservation of contradictory and heterodox belief traditions.

[5]  Gorsuch was raised Catholic and now attends a liberal Episcopalian church – an example of the imperfect correlation of nominal religious affiliation with stance on judicial hermeneutics.  To the extent that exegetical paradigms influence legal interpretive paradigms, they are defined more by the sacred/cultural canopy than by individual worship community membership.

[6]  The long deliberations often dive into mind-numbing semantic minutiae.  Recall that President Bill Clinton was excoriated by the media in 1998 for caveating acknowledgement of an extramarital affair based on “what the meaning of ‘is’ is”.  This was plausibly earnest logic from a cerebral law school graduate who knows that significant constitutional decisions have hinged on the meaning of a single word.

[7] The accusation of oversimplification is used to silence commentary on certain topics, including constitutional hermeneutics, by laypeople.  However, any topic can be addressed in a paragraph, an essay, a book, or an entire shelf of texts.  It’s a self-serving cognitive bias to believe one’s own professional domain uniquely defies summarization and explication.  Constitutional law is not a mystery beyond the comprehension of those who didn’t attend law school.  Justices:constitution::clergy:bible.  Both law and religion benefit from thoughtful disintermediation.

[8]  Although the signers were nominally Christian Protestants, some held non-Trinitarian and Deist beliefs that most Christians today would label “non-Christian”.

[9]  “Hominin” refers to the taxonomic tribe Hominini within the great ape family, in the primate order.  It comprises the genus Homo, with its multiple extinct species and one extant species, sapiens, plus other genera of extinct pre-human primates.

Posted in >10 min read, Religion, Social issues, [All posts]

Paternalism, autonomy, and milk

8 min read

Progressive-minded people love to ridicule the internal inconsistency of right-wing social conservatives.  Conservatives’ defining values are limited government, individual liberty, and privacy…but one of their core goals is federal government control over private medical decisions.  They deify individual choice and (the myth of) meritocracy…but are also unapologetically anti-choice, giddily dooming poor pregnant women and teenage girls to lives where structural poverty obliterates individual merit.

Yet, the socially progressive worldview also contains a core contradiction:  Understanding that meritocracy is a myth and poor people are not at fault for their situations…but believing ardently in the need for governmental control over poor people’s behavior.  Progressives recognize the humanity of the less fortunate.  They lament poor people’s limited autonomy in the face of widespread discrimination, low wages, environmental and legal injustice, low-quality education, and community violence…but they further limit personal autonomy via paternalistic welfare programs.

Recently, I opined to a fellow bleeding-heart liberal about the wisdom of cash payments to poor people in lieu of behavioral regulation via in-kind welfare.  The topic has been abundantly researched and well documented in the media: Unconditional cash payments to poor people are “surprisingly” more effective.  Since gaining attention some two decades ago, the idea has spread around the globe.

My companion disagreed.  When she worked checkout at a grocery store long ago, people on food stamps who had a grocery bill higher than their food stamp card balance would routinely tell her to “put back the milk”.  She glanced at me with a horrified facial expression, seeking commiseration.

But, I’m the opposite of horrified.  I can easily think of numerous rational reasons to “put back the milk” in favor of boxes of processed food, beginning with this:

We heavily subsidize a food that 45% of food stamp recipients can’t even digest.  (Lactose intolerance affects approximately 75% of African-Americans, 50% of Hispanics, 100% of East Asians and Native Americans, and 20% of Anglos.  Food stamp recipients are 26% African-American, 10% Hispanic, 4% East Asian and Native American, 40% white, and 20% unknown ethnicity.)  Lactose intolerance causes intestinal pain, diarrhea, bloating, and vomiting.  Many people don’t know they are lactose intolerant but may have a subconscious aversion to milk, drinking it medicinally because, in the swamp of endlessly-conflicting nutritional advice, many voices insist it’s good for you.  Years ago, my Asian husband suffered intractable Irritable Bowel Syndrome for which he took medication and sometimes skipped work…until it dawned on us that my dairy-centric meal plan was making him sick.
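The 45% figure follows from a weighted average of the cited intolerance rates over the cited demographic shares (a quick sketch using only the essay’s own numbers; the grouping labels are the essay’s, not official USDA categories):

```python
# Sanity check of the ~45% figure: weighted average of lactose-intolerance
# rates across food stamp recipients' ethnic groups, using the essay's numbers.
groups = {
    # group: (lactose intolerance rate, share of food stamp recipients)
    "African-American": (0.75, 0.26),
    "Hispanic": (0.50, 0.10),
    "East Asian & Native American": (1.00, 0.04),
    "white": (0.20, 0.40),
}
known_share = sum(share for _, share in groups.values())           # 0.80 (20% unknown)
intolerant = sum(rate * share for rate, share in groups.values())  # 0.365

print(f"Among recipients of known ethnicity: {intolerant / known_share:.0%}")
# Prints roughly 46%, consistent with the ~45% cited above.
```

In other words, the figure holds among recipients whose ethnicity is known; the 20% of unknown ethnicity are excluded from the denominator.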

Rich white people wrote the economic rules.  And, they wrote them from their non-universal perspective.  Bias is invisible to those that hold it.  Even as Anglo-America’s age-old faith in the virtuousness of milk has lately been shaken by science, those of us upon whose distant ancestors evolution bestowed the lactase enzyme will be the last to fold.  Debate over continued government subsidy of the dairy industry reveals how milk takes on quasi moral significance in our culture.

Other valid reasons to put back the milk:

  • Kids in school. All children of parents on food stamps automatically get free milk at school, via the USDA’s Special Milk Program, in place since the 1950s (other kids pay $0.25 to $0.65 per half pint).  So, putting the milk back doesn’t mean denying nutrition to kids – at least on weekdays during the school year.  And, for adults, the health benefit of cow milk over vegetables is no longer supported by science.
  • Spoils quickly. Because milk goes bad quickly and poor people fastidiously avoid food waste, they may only buy milk when they’re confident that in-town schedules guarantee it’ll get used up quickly.  Similar to how financial credit serves to smooth income, and layers of exquisite knotted carpets in an otherwise bare Bedouin tent serve to store wealth, so packaged food serves to smooth food consumption.  When my next payday is uncertain, I’m financially and nutritionally better off with calories stored as non-perishable food in the cupboard.
  • Not filling. When food is scarce, it’s rational to buy food with the highest satiety index per dollar (usually also the highest calories per dollar).  Like it or not, that means cheap carbs.  We should fix food insecurity, and fix the systemic issues making processed carbs cheap – not expect hungry people to behave irrationally, or to take a long-horizon view of nutrition when they’re trapped in a short-horizon survival game. 
  • Alternate source.  If I check out with eggs, flour, chocolate, and butter, the grocery store clerk will often remark, “You forgot the sugar!” (for the brownies I must surely be making tonight).  Cognitive bias blinds him to the possibility that I already have sugar at home.  Similarly, putting milk back may mean there’s already milk in the fridge that should be finished first in order to not waste money.  It may mean that cheaper milk can be gotten at a food bank, so food stamps are better used for non-milk items.  Or, it may mean that another family member with their own funds can get the milk.   

The point is that if you don’t conduct an in-depth interview, you cannot reasonably judge such a decision.  If you’re a social progressive, your overarching philosophy demands that you respect the decision-making capacity of individuals and recognize that nobody wants to be hungry or unhealthy.  We should all cultivate “sonder” – a neologism defined by the Dictionary of Obscure Sorrows as “the realization that each random passerby is living a life as vivid and complex as your own—populated with their own ambitions, friends, routines, worries and inherited craziness—an epic story that continues invisibly around you like an anthill sprawling deep underground, with elaborate passageways to thousands of other lives that you’ll never know existed, in which you might appear only once, as an extra sipping coffee in the background, as a blur of traffic passing on the highway, as a lighted window at dusk.”  Arm’s-length judgment and condescension have no place in the liberal person’s worldview.

Cash payments work better than in-kind welfare services.  They are cheaper to administer and more effective at alleviating poverty.  The body of research that showed this is, incidentally, the ground that supports the “universal basic income” idea – a new policy darling of the left.  Paying people a subsistence income frees them from taking the first dead-end job that comes along, and it enables them to make economically efficient investments in more productive activities.  Advocates believe (supported by ample evidence) that most recipients wouldn’t use the income to sit around idly.  So, we see that left-wing progressives’ inability to transfer conclusions from one domain (cash payments labeled “universal basic income”) to another (cash payments labeled “welfare”) echoes that same silo problem among right-wing ideologues.

A well-meaning friend bought me $40 of groceries.  He chose what items to buy, paternalistically believing he knows what’s best.  But if I had control of $40 in cash, I would have used it at a store where it went further, and bought foods that work best in my body and in small quantities that I have no chance of wasting given what’s already sitting in my fridge.  I would have kept some of it to fund my dying dog’s pain medication.  Still, I am grateful for the gesture and the food (including a delicious quart of milk, which I happily drink with a grateful nod to my Germanic cow-herding ancestors).  My friend’s understandable desire for his charity money to be used efficiently translated into the money being used somewhat inefficiently.  Without walking in a poor person’s shoes, you simply cannot know what is best.

Emergency food stamp benefits of $194 per month can only be used in grocery stores and farmers’ markets.  With $150 free cash instead, I could get my car approved for carshare service – i.e., craft my own proverbial fishing pole, rather than depend on handouts of fish.  This inefficiency of in-kind payments is what leads to people standing idly outside of low-income grocery stores, offering to buy groceries with food stamps in exchange for cash.  Conditional welfare payments create economically inefficient gray markets.  Participants lose time, money, dignity, and autonomy.

If I don’t make it out of the black hole of poverty, I will likely spend my last $5 on coffee rather than a final meal – buying a few crucial hours of hunger abatement and mental focus, and thus a last-ditch chance to happen upon a fishing pole.  To a thoughtless and callously paternalistic observer, that dogged resourcefulness might look like a stupid choice.

Another friend recounts driving past an apparently-homeless man smoking a cigarette.  My friend viciously rails about how stupid poor people are, “wasting money on cigarettes that cost $12 a pack”.  I rail back at how unimaginative he is, presuming that anyone smoking a cigarette purchased it and did so without a valid reason.  The man could have been gifted the cigarette by a passer-by, cleverly bartered for it, or bought it loose for next to nothing.  If he bought it, it could be a once-in-forever indulgence, a substitute for medication he can’t afford, or a strategic means of managing gnawing hunger.  Until you ask, you don’t know.  The wise person – and the person whose political identity rests on belief in the complex and structural drivers of poverty and the recognition that there are hapless geniuses stuck in homeless shelters and lucky idiots inhabiting corporate board rooms – must reserve judgment.

None of this should feel so unrelatable.  Nearly half of Americans will apply for a means-tested welfare program at some point in our lives.  Of course, because of the stigma, many of us don’t know for sure who among our acquaintances and colleagues has done so.  But a reasonable level of observational awareness would make it evident.

The medical field practices “paternalism in the name of autonomy”.  By definition, illness involves some diminished capacity.  Paternalistic medical treatments aim to restore patients’ autonomy.  Admirably, physicians hand-wring over this issue, uncomfortable with the possible excesses of paternalism while keenly focused on their mission to preserve and restore autonomy.  Social policy advocates should take a page from medical ethics.  It would be wise to frame paternalism in social welfare as a means to an autonomous end – an ethically-challenging means to be employed only in limited circumstances and with great caution.

Posted in 5-10 min read, Food, Social issues, [All posts]

How the data revolution makes universal health insurance inevitable

14 min read

The limit of U.S. healthcare policy as the data revolution progresses is single-payer, government-provided, universal health insurance. 

Meaning?  Health care delivered by assorted private, public, and not-for-profit medical facilities — just like it is today.  Health insurance financed by the federal government and given to everyone.  a.k.a. “Medicare for all”.  a.k.a. the health insurance system that U.S. Congresspeople enjoy today.

Why?   Because predictive analytics increasingly empowers private payers to identify high-risk customers…even before they become high-cost patients.  And, because a complex hybrid public-private health insurance system leaves private payers with opportunity to selectively excise those future-high-cost patients from their customer rolls.

Data predicts medical problems 

Medical data has existed as long as societies have had professional medical providers.  Predictive analytics has been around since the 17th-century origins of probability theory and the 19th-century invention of regression analysis.  Predictive analytics was later made less time-consuming by software (Excel, SPSS, SAS) that automates the statistical math – and does so in an interface that frees the user from needing to understand the underlying arithmetic.

What’s different now is that we have more voluminous medical data from more sources: data from wearable devices, digitized text chart notes, electronic medical histories, patient-reported information in apps, real-time vital sign monitoring data, lab results, genetics test results, and published medical research insights.  We also have cloud-based IT systems to compile those disparate data types into a centrally-accessible and analyzable format.  Lastly, we also have more advanced analytics software, which uses data mining, qualitative text analytics, and machine learning techniques, in addition to standard statistical modeling. 

Big data- and technology-enabled algorithms deployed today “predict” (i.e., assess the risk of) sepsis, heart failure, stroke, timing to safely remove artificial ventilation, diabetes complications, need for ICU versus standard hospital admission, bleeding during surgery, need for elective infant delivery, and many other medical outcomes.  Such “prescriptive analytics” enable higher-quality medical decision-making and early intervention to lower the chance of adverse outcomes.
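At their core, such risk algorithms map patient measurements to a probability that a clinician can act on. Here is a minimal sketch of the idea, in the style of a logistic model; the features, weights, and patients are invented for illustration and are emphatically not a validated clinical model:

```python
import math

# Toy illustration of a clinical risk score: weighted deviations from
# "normal" vitals, squashed into a 0-1 probability by the logistic function.
# All weights and thresholds here are made up for illustration only.
def sepsis_risk(heart_rate, temp_c, wbc_count):
    z = (-2.0                           # baseline (most patients are low-risk)
         + 0.04 * (heart_rate - 80)     # tachycardia
         + 0.90 * (temp_c - 37.0)       # fever
         + 0.10 * (wbc_count - 8.0))    # elevated white blood cell count
    return 1 / (1 + math.exp(-z))

# A deteriorating patient scores far higher than a stable one, which is
# what allows early intervention before the adverse outcome occurs.
deteriorating = sepsis_risk(125, 39.2, 16.5)
stable = sepsis_risk(72, 36.8, 7.0)
print(f"deteriorating: {deteriorating:.2f}, stable: {stable:.2f}")
```

Real deployed models use far richer inputs (chart notes, labs, real-time monitoring) and machine-learned weights, but the input-to-probability shape is the same.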

In addition to data proliferation and analytics evolution, there are transformative medical innovations in the pipeline: 

  • Things that improve health outcomes, but make care more expensive: advanced prosthetics, 3D printed organs, robotic home care, VR-assisted surgery.  Private insurers have an incentive to charge more for people likely to need such treatments.  How actionable that incentive is depends on health insurance regulatory policy.
  • Things that make care cheaper due to early detection, but identify unhealthy people in so doing: nanopills that detect early signs of cancer and heart disease from inside the body, wearables like contact lenses with streaming data to detect glucose levels or other leading indicators of disease, full-body multi-functional radiologic diagnostic scans. How will private insurers value the avoided costs from early detection compared to the non-zero cost of long-term treatment?
  • Things that cure diseases that would otherwise become expensive, but at a high upfront cost: gene therapy, gene editing, “exercise in a pill”.  How will private insurers balance known upfront costs against unknown future actual or avoided expenses?

Among all the growing sources of data, payers only have access to patient demographics and individual claims data (procedures, medications, and lab tests ordered), plus published research insights (which link demographics and treatments with outcomes).  In contrast, providers have all of that, plus all data any provider generates about a patient (e.g., lab test results) and patient-reported data.  The reach of payer predictive analytics is, thankfully, bounded by America’s strict medical privacy laws.  However, payers are able to extract ever more predictive value from the increasing volume of data they do have.  The power of inference increases as data accumulates.

Health insurance companies are rational 

A fundamental premise of the Affordable Care Act (ACA) is non-discrimination based on medical condition.  Yet, in late 2016, the outgoing Obama administration allowed a federal agency to issue a regulatory rule that effectively allows private insurers to excise their sickest patients.

Even a simple explanation of the background for this surprising action is necessarily lengthy, but it’s worth pausing to understand.  The ACA champion’s anti-ACA rule affects dialysis patients, as follows:

      America’s 700,000 patients with End Stage Renal Disease (ESRD) often can’t work, since they sit for hours-long life-saving dialysis sessions several times per week…forever (unless they get to the top of the 100,000-person-long national kidney transplant list).  Still, about 60% don’t financially qualify for Medicaid (depending on individual state policy regarding ACA-related Medicaid expansion).  All ESRD patients are eligible for Medicare (it’s the only diagnosis-linked eligibility group), but those under 65 aren’t eligible for the Medicare gap coverage (depending on state policy) that makes Medicare affordable. 
       Meanwhile, dialysis providers earn more from patients enrolled in private insurance plans than those on Medicare/Medicaid.  Therefore, “charitable premium assistance” from dialysis providers to ESRD patients is widespread (though somewhat masked to private insurers by using not-for-profit intermediaries).  For example, a provider pays ~$6000 in annual private insurance premiums on behalf of a patient, in exchange for ~$100,000 in incremental annual payouts from the private insurer: charity that yields a 1500% financial return on investment.
       Historically, 90% of ESRD patients use Medicare and 10% use private insurance.  This is a one-time, very weighty decision that frightened patients must make upon initial diagnosis because one of the ACA’s consumer protections is a prohibition on selling private insurance to Medicare recipients.  Given that the ACA has successfully made private insurance more affordable with capped annual out-of-pocket expenses, the privately-insured ESRD segment will grow organically (if the ACA indeed remains law).  In addition, dialysis providers have a strong financial motivation — and thus a potential conflict of interest with Hippocratic obligations — to steer patients into private insurance.
       One of the ACA’s notoriously-numerous loopholes is that it’s legal for private insurers to void coverage for patients receiving charitable premium assistance (unless they have HIV/AIDS) and to reject Medicare-eligible patients.  Private insurers have only sporadically followed this practice to date, but are getting more aggressive as their own financial pressures mount.  A bipartisan House bill to address the issue failed in 2015.
       The Center for Medicare and Medicaid Services (CMS) has authority over healthcare provider facilities that treat Medicare/Medicaid patients (i.e., almost all of them).  A December 2016 proposed CMS rule forces healthcare providers to inform private insurers if they’re making charitable premium payments – thereby facilitating private insurers’ expansion of the ACA-sanctioned practice of excising costly ESRD patients from their risk pools.
       The ACA has made private insurance preferable to Medicare for many of the ~50% of ESRD patients who are both under 65 and also Medicaid-ineligible.  However, Medicare is preferable for at least the other half.  Why?  ESRD patients on Medicare don’t risk: (a) coverage interruption due to their insurer voiding coverage upon discovering charitable premium assistance, (b) coverage interruption due to provider cessation of charitable assistance post-transplant (when patients still need a lifetime of costly immunosuppressant drugs), (c) de-prioritization on the national kidney transplant waiting list due to anticipated coverage interruption, or (d) high residual payments for the 65% of their healthcare spending that is unrelated to ESRD (if they’re under 65 and thus can’t get Medicare gap coverage).
       Rather than making it easier for private insurers to excise high-cost, chronically ill patients, a more humane and financially efficient solution is:
 (1)  Legislatively close the ACA loophole that legalizes denying private insurance to patients receiving charitable premium assistance and to Medicaid-eligible patients.
 (2)  Legislatively make Medicare gap coverage eligibility congruent with Medicare eligibility.
 (3)  Eliminate the existing CMS rule that discourages private insurers from accepting charitable premium assistance.
 (4)  Cancel the December 2016 proposed CMS rule that mandates charitable premium assistance disclosure. [Overturned by a federal court in January 2017]
 (5)  Mitigate provider conflict of interest by mandating patient-specific analysis of the private vs Medicare/Medicaid insurance decision, and by reinforcing provider ethics in favor of patients’ interests over financial gain. [Completed by not-for-profit intermediaries, effective January 2017]
 (6) Improve the Medicare enrollment process for privately-insured, post-transplant ESRD patients to prevent kidney transplant disqualification by eliminating risk of coverage interruption. (This could take the form of opt-out automatic enrollment, better provider-insurer coordination, and/or a transplant center-led program.)
 (7)  Increase Medicare payouts for dialysis to cover actual provider costs, to reduce financial incentive to steer patients into private insurance.
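The strength of the charitable-premium-assistance incentive described above is plain from the arithmetic, using the essay’s approximate figures:

```python
# A provider's "charity math" for one privately-insured ESRD patient,
# using the approximate figures cited above.
premiums_paid = 6_000    # annual premiums the provider pays on the patient's behalf
extra_payout = 100_000   # incremental annual payout vs. Medicare reimbursement

roi = (extra_payout - premiums_paid) / premiums_paid
print(f"Return on the provider's 'charitable' outlay: {roi:.0%}")
# Prints roughly 1567% -- the ~1500% return cited above is, if anything, conservative.
```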

Imagine that, instead of all this tediously-complex, economically-inefficient horse-trading of patients between Medicare and private insurance, we enact “Medicare for all”.  

In the above ESRD example, Obama didn’t write the offending CMS rule; but, the executive branch oversees the CMS, and thus at least tacitly approved the proposed rule.  So, the guy whose proudest presidential legacy is the ACA capitulated to weakening it.  That’s evidence of how powerful the systemic incentives are to cherry-pick patients based on expected medical cost.  Moreover, those incentives aren’t static – they will strengthen further as the cost of care delivery inexorably rises.  (The cost of healthcare delivery is a serious, independent issue which influences the health insurance crisis and thus must be addressed in parallel.)

It’s folly to expect private companies to self-limit.  The defining objective of a corporation is profit-maximization.  To behave otherwise, companies must explicitly bake a social mission into their decision-making framework (e.g., by using a triple bottom line, lower discount rate, more comprehensive risk assessments).  Water runs downhill.  Such is the argument in favor of market regulation to preserve the public good.  Someone got a Nobel Prize in Economics for explaining this (Jean Tirole, 2014): regulation is necessary for a well-functioning market, and effective regulation is necessarily complex and nuanced, adaptive over time, and customized to each industry.

The combination of private provision of health insurance with predictive data led to a classic market failure:  Quality fell (insurance policies covering less stuff).  Quantity fell (fewer people had health insurance).  Someone got a Nobel for addressing that, too (Daniel Kahneman, 2002):  regulation of insurance product content and regulatorily-automated insurance enrollment are necessary to address bounded rationality and bounded willpower of consumers.  Hence, the ACA.  And, hence, the improvements still needed to perfect the ACA.

Predictive data power + Private profit motive = Deepening healthcare crisis   

Data is getting more powerful.  Incentives are strong.  Data will be used with ever-increasing effectiveness to follow those ever-strengthening incentives, i.e., to reduce coverage of the sickest people.

We’ve seen this play out over decades, with the American health insurance system becoming more and more unfair:  pre-existing condition exclusions, reproductive and maternity care de-standardization, horror stories of untreated people dying of treatable conditions in the world’s richest country.  The patient horror stories are a leading market indicator, akin to oil prices spiking as reserves approach depletion, new realtor license applications surging before an asset price bubble bursts, and the proverbial writing on the wall before Babylon fell. 

This downward trend has fueled 20 years of attempted health insurance system reform, culminating in the ACA. 

Band-aid solutions

  1. 1993: First Lady Hillary Clinton leads a task force to identify a solution to the healthcare crisis.  Proposes mandatory universal private health insurance coverage, with government-subsidized premiums for low-income individuals.   Outcome:  Conservatives deny there’s a healthcare crisis and complain this solution is too complex.  Liberals say single-payer universal health insurance (i.e., “Medicare for all”) would be better.  Nothing is implemented.
  2. 2006: Massachusetts Republican Governor Mitt Romney enacts mandatory universal private health insurance coverage, with government-subsidized premiums for low-income individuals.  Outcome:  Uninsured Massachusetts population drops from 10% to 2%.  Continues in force today, with no meaningful opposition.
  3. 2010: President Barack Obama copies RomneyCare, enacting mandatory universal private health insurance coverage, with government-subsidized premiums for low-income individuals.  Outcome:  Conservatives abhor the mandate, but have no counter-proposal.  Liberals believe single-payer universal health insurance would be better, but that it’s unrealistic politically.  Implemented in 2014 after judicial challenges.  Uninsured U.S. population drops from 18% to 10%. Widespread popular support.

Note that, each time, we landed on the same solution…regardless of political party.

The ACA is a valiant effort that has meaningfully improved millions of lives.  Obama was right to consider it a great legacy.  But, it’s a losing battle against the forces of rational monetary incentive among private companies.  The ACA is a makeshift dam.  Water runs downhill. 

The ACA’s imperfection is that it needs to be more comprehensive and absolute.  In economic terms, it’s an “incomplete contract”.  If government is to leave provision of health insurance in private hands, it must specify product quality (i.e., the list of minimum coverage requirements for exchange-qualified health insurance plans) and quantity (i.e., insurance available to everyone regardless of pre-existing conditions, age, occupation status, etc.).  Economists also note that price caps on policies (i.e., max annual total cost to policyholder of premiums + deductible + copays) are inefficient because of informational asymmetry between government regulators and private insurers (Joseph Stiglitz, 2001 Nobel).   

Partisan wrangling to pass the landmark legislation left loopholes for private insurers to excise high-risk people.  They can provide limited coverage of medications used by the chronically ill to make plans unappealing to them (not prohibited by the ACA), charge higher premiums for the elderly (allowed by the ACA), classify university hospitals as out-of-network because they offer more advanced and costly treatments (not prohibited by the ACA), sell lower-quality and discriminatory insurance outside the ACA-regulated exchange (allowed by the ACA), and void coverage of policyholders who receive charitable premium assistance (allowed by the ACA). 

Health insurance is a textbook case for intervention, for two reasons.  First, consumers don’t have perfect information to make a choice (think hundred-page insurance policy documents full of technical jargon).  Second, the consequences of choosing badly are very high (buying a policy that doesn’t wind up covering what you thought it did, and thus not being able to afford medical services you need).  It’s precisely in such a situation when market intervention is economically and morally justified.  Accordingly, the ACA cleverly packaged governmental financial intervention (subsidies to help consumers pay premiums) with regulatory intervention (rules about exchange-qualified insurance policy content and inclusivity + a consumer mandate to purchase insurance).

But, when financial and regulatory interventions in the private market fail, it’s time for public provision of the good: single-payer government-provided universal health insurance. 

The inevitable solution  

Give everyone health insurance.  Not just old people (Medicare), really poor people (Medicaid), military and veterans (VA), and US Congresspeople. 

“Everyone” means everyone living legally in the US:  citizens, permanent residents (i.e., green card holders), and temporary residents (i.e., visa holders).  Tourists and undocumented residents continue to finance their own insurance or pay out of pocket.  In other words, if you are obliged to pay taxes in the US, you are currently obliged to get insurance under the ACA.  Under a future single-payer universal health insurance system, that same group of people would be automatically enrolled. 

As part of such a system redesign, it would be imperative to true up government insurance payouts with actual private healthcare provider costs.  And, with or without universal health insurance, it is imperative to address skyrocketing healthcare delivery costs.  Single-payer insurance would be simpler, but it wouldn’t by any means be simple – and it wouldn’t be successful in a policy vacuum.

The (losing) compassion argument  

The argument from compassion and fairness using “patient stories” doesn’t work well on strict ideological conservatives…for the same reason that images of forlorn polar bears on tiny ice floes don’t work.  Voluminous academic research in political psychology has shown that conservative brains respond to messages of personal responsibility and purity, not fairness and tolerance.  Their primary objective is moral order, not equality. 

The psychology of political belief formation explains that conservatives tend to see the world as reductively black-and-white, resist grayscale compromise, heavily discount the future, and reserve loyalty and empathy for a parochially-defined in-group.  In practice, they value deterring a small number of free riders more than helping a larger number of qualified beneficiaries of government programs.  As I’ve written elsewhere, this can be constructively framed as a conservative policy preference for avoiding Type I errors of over-inclusion (a.k.a. statistical “gullibility”) over avoiding Type II errors of over-exclusion (a.k.a. statistical “blindness”).  
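The Type I / Type II framing above can be made concrete with a toy screening example.  (The function, scores, and threshold values below are entirely invented for illustration; they aren’t drawn from any real benefits program.)

```python
# Toy illustration of the Type I / Type II error trade-off in benefit
# screening.  All scores and thresholds are invented for demonstration.

def screen(applicants, threshold):
    """Approve applicants whose evidence score meets the threshold.

    Each applicant is an (evidence_score, truly_qualified) pair.
    Returns (type_i, type_ii):
      type_i  -- unqualified applicants approved (over-inclusion, "gullibility")
      type_ii -- qualified applicants denied     (over-exclusion, "blindness")
    """
    type_i = sum(1 for score, qualified in applicants
                 if score >= threshold and not qualified)
    type_ii = sum(1 for score, qualified in applicants
                  if score < threshold and qualified)
    return type_i, type_ii

applicants = [(0.9, True), (0.8, True), (0.6, True),
              (0.7, False), (0.4, False), (0.2, False)]

print(screen(applicants, 0.5))   # lenient threshold -> (1, 0): one free rider admitted
print(screen(applicants, 0.75))  # strict threshold  -> (0, 1): one qualified person denied
```

No threshold eliminates both errors at once; the political disagreement is over which error matters more.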

Though it may sound harsh to the unfamiliar and to the idealistic, it is uncontroversial to note that compassion is relatively low on conservatives’ list of values.  They say it, their voting reflects it, and research confirms it.

The (winning) data argument 

The argument that data makes universal health insurance inevitable uses language resonant with conservatives.  The inescapable reality of big data and advanced predictive analytics leads organically to a logical conclusion of where it’s all going.  No special pleading is necessary.

We’re not talking about moral reframing to improve the argument.  Moral re-framing is how pro-intervention environmental pragmatists (e.g., the Pentagon, scientific community, property insurers, 144 Paris Agreement ratifying nations) will eventually win the climate change action debate among American voters.  They’ll argue for restoring past environmental purity, protecting America from climate-driven military conflicts, preventing storm damage to private property, and conserving the sanctity of God’s creation — rather than argue for “compassionately” investing in future generations’ welfare, preserving evolutionary biodiversity, or protecting faraway wilderness. 

Moral reframing of single-payer universal health insurance might go something like this:  “It’s the purest way to guarantee loyal taxpayers the liberty to freely choose where to receive medical treatment. Our current system of many private insurance companies burdens families with complex restrictions on who’s in which network and who can go to which hospital.  Single-payer insurance will restore the foundational American ideal of all hard-working individuals having dominion over their personal health outcomes.”  That’s true.  But, it’s a weak argument.

The data argument is an appeal to logic that doesn’t rely on out-group empathy:  prediction gets better, risk pool wheeling and dealing gets worse, and we end up in the 1997 dystopian film “Gattaca”.  Political ideology can’t insulate anyone from predictive analytics.  Soon enough, data and policy could empower private insurers to cut fat-but-healthy people off for being on a path to expensive diabetes treatment.  Only immense wealth can offset being denied insurance and having to pay tens or hundreds of thousands of dollars out of pocket for medical treatment.
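To see the mechanism, here is a deliberately crude sketch of predictive cherry-picking.  (The cost model, its coefficients, and the premium figure are all invented for illustration; real actuarial models are far more sophisticated, which only strengthens the point.)

```python
# Hypothetical sketch of risk-pool cherry-picking: an insurer with a
# predictive cost model ranks applicants and declines (or prices out)
# anyone whose expected cost exceeds expected premium revenue.
# Every number below is invented for illustration only.

def predicted_annual_cost(age, bmi, chronic_conditions):
    # Toy linear model -- not a real actuarial formula.
    return 500 + 40 * age + 400 * max(bmi - 25, 0) + 4000 * chronic_conditions

def accept(applicant, annual_premium=5000):
    # Cover the applicant only if the model expects a profit.
    return predicted_annual_cost(**applicant) <= annual_premium

pool = [
    {"age": 30, "bmi": 24, "chronic_conditions": 0},  # young and healthy
    {"age": 45, "bmi": 33, "chronic_conditions": 0},  # "fat but healthy"
    {"age": 60, "bmi": 27, "chronic_conditions": 2},  # chronically ill
]

print([accept(a) for a in pool])  # -> [True, False, False]
```

The better the model predicts future cost, the more precisely the sickest (and the not-yet-sick) can be excluded; the incentive to do so is built into the profit motive.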

Experience over empiricism underlies conservatism’s skepticism toward change.  But, at some point, empirical reality becomes so well-documented that it’s undeniable.  That’s when we get single-payer universal health insurance. 

How soon is “inevitable”?  

Hillary Clinton’s 1993 healthcare proposal was met with claims that there was no healthcare crisis.  Two decades later when Barack Obama enacted the ACA, no credible voices dared claim that there wasn’t a healthcare crisis.  Reality is chipping away at ideology.

Demographics will inexorably shift the country to the left.  So goes the platitude of frustrated social progressives.  But, it’s taking longer than predicted.  The political right is successfully kicking the can down the road with gerrymandering.  Similarly, neglected neighborhoods eventually rebound, but that can realistically take more than a generation – longer than beleaguered residents or gentrifying pioneers expect to tough it out.  In 2017, there are places only recently recovered from the 1968 riots.

Likewise, trends suggest that data and its consequences will force the issue of universal health insurance…eventually.  We may tinker with band-aid solutions for quite a while longer.  The pain must become extreme to change the minds of entrenched skeptics.  But data — like demographics and economic cycles and water — will ultimately force its own agenda.  It’s just a matter of time.

March 2017
