binary thinking

Cognition is more complicated than IQ.

I try to stay out of political theory on this blog, but Vox Day’s essay on the differences between the “VHIQ” and the “UHIQ” struck me as intellectually interesting enough that I felt like exploring it further. Personally, I don’t know what my IQ is, so that means I am merely above average*, since only people with very/ultra-high IQ seem to be motivated to willingly take the test. VD lists a number of plausible qualitative traits, of which the following caught my eye:

VHIQ inclines towards binary either/or thinking and taking sides. UHIQ inclines towards probabilistic thinking and balancing between contradictory possibilities.

VHIQ is uncomfortable with chaos and seeks to impose order on it, even if none exists. UHIQ is comfortable with chaos and seeks to recognize patterns in it.

VHIQ is competitive. UHIQ doesn’t keep score.

VD goes on to quote Wechsler, the creator of the WAIS intelligence test, at length and summarizes:

Wechsler is saying quite plainly that those with IQs above 150 are different in kind from those below that level. He is saying that they are a different kind of mind, a different kind of human being.

The division into binary groups here – “normal human” (sub-150 IQ) and the Next (150+), and then at the next iteration between VHIQ and UHIQ, is confusing to me, particularly since it is IQ itself being used to classify people into the binary choices. In the comments, VD clarifies (?) that “It’s entirely possible for a 175 IQ to be VHIQ and for a 145 IQ to be UHIQ” but that just moves the binary classifying to a relative scale rather than an absolute one. Since he also asserts that you need to be at least +3 SD (ie, IQ of 145) to even qualify as VHIQ, it’s clear that the numbers do matter.

There’s a glaring circularity here that I am doing a poor job of articulating. I’ll just make note of it and move on.

VD’s excerpted passage from Wechsler is, however, nonsense. He created an empirical test, intended to assess “varying amounts of the same basic stuff (e.g., mental energy)” and then made it into a score. I have worked with neurologists before and they make the same category error that psychologists like Wechsler do, in ascribing quantitative rigor to tests like the Expanded Disability Status Scale (EDSS). Just because you can ask someone a bunch of qualitative questions and then give them a “score” based on a comparison of their answers to those of a “baseline” person, does not mean you have actually magically created a quantitative test. Wechsler’s very use of the word “quantitative” is an abuse of language, a classic soft-sciences infatuation with concepts best left to hardsci folks. There’s nothing quantitative about the WAIS whatsoever, until you look at aggregate results over populations. Wechsler lacked even a basic understanding of what human cognition’s base units might be – certainly not hand-wavy bullshit like “mental energy”. Volumetric imaging with DT-MRI is probably the only actual quantitative method the human race has yet invented to probe that “basic stuff” of which Wechsler dreams; but there are some serious engineering constraints on how far we can go in that direction.**

Human cognition isn’t so easily captured by a single metric, even one built on such a muddy foundation as the WAIS. It’s chaotic, and emergent, and inconsistent. This infatuation with pseudo-quantitative testing isn’t limited to the WAIS; people overuse Myers-Briggs and over-interpret fMRI all the time. Do qualitative metrics like the WAIS or EDSS have value in certain contexts? Of course. However, as a signpost towards Homo Superior, it’s no better than Body Mass Index.

* Why bother with false modesty? I do have a PhD in an applied physics field, after all, and I scored higher than VD on that one vocab test, so empirically it seems reasonable to suppose I am somewhat ahead of the curve.

** spouting off about fMRI in this context is a useful marker of a neurosci dilettante.

The Hugo Awards and political correctness


The Hugo Awards are science fiction’s most celebrated honor (along with the Nebula Awards). This year there’s a political twist: the accusation that the Hugos are “politically correct” and favor liberal writers over those with conservative political leanings.

The fact that Orson Scott Card won the Hugo in both 1986 and 1987 for Ender’s Game and Speaker for the Dead, or that Dan Simmons won a Hugo in 1990 for Hyperion, is sufficient evidence to prove that no such bias against conservative writers exists [1].

The current controversy is a tempest in a teapot, originating because two conservative writers (Larry Correia and Theodore Beale aka “Vox Day”) have decided to make an example out of the entrenched political correctness that both are convinced exists (see: confirmation bias). Here is Correia’s post about his actions and here is Beale’s. One of the common mantras of these people is that their hero, Robert Heinlein, would not be able to win a Hugo in today’s politically correct world.

Past SFWA president, Hugo winner, and all-around good guy on the Internet, John Scalzi definitively refutes the idea that Heinlein would not have won a Hugo and does so with genuine insight and understanding of who Heinlein was, what he wrote, and how Heinlein himself promoted SF as a literary genre. Key point:

When people say “Heinlein couldn’t win a Hugo today,” what they’re really saying is “The fetish object that I have constructed using the bits of Heinlein that I agree with could not win a Hugo today.” Robert Heinlein — or a limited version of him that only wrote Starship Troopers, The Moon is a Harsh Mistress and maybe Farnham’s Freehold or Sixth Column — is to a certain brand of conservative science fiction writer what Ronald Reagan is to a certain brand of conservative in general: A plaster idol whose utility at this point is as a vessel for a certain worldview, regardless of whether or not Heinlein (or Reagan, for that matter) would subscribe to that worldview himself.

They don’t want Heinlein to be able to win a Hugo today. Because if Heinlein could win a Hugo today, it means that their cri de coeur about how the Hugos are really all about fandom politics/who you know/unfairly biased against them because of political correctness would be wrong, and they might have to entertain the notion that Heinlein, the man, is not the platonic ideal of them, no matter how much they have held up a plaster version of the man to be just that very thing.

Read the whole thing.

In fact, the whole idea that the Hugos are biased against conservatives is a form of political correctness in and of itself. Steven just linked this article about how political correctness is a “positional good” and summarizes:

briefly, a positional good is one that a person owns for snob appeal, to set oneself apart from the rabble. Ownership of the positional good is a way of declaring, “I’m better than you lot!” And it continues to be valued by the snob only as long as it is rare and distinctive.

The idea, then, is that being one of the perpetually aggrieved is a way of being morally superior. I’m open-minded and inclusive, which makes me better than all those damned bigots out there.

Of course, Steven is invoking this idea as a critique about liberals crying racism; he overlooks the same dynamic at work by conservatives crying about exclusion, possibly because he is sympathetic to the “Hugos are biased” claim.

Regarding that claim, Scalzi had meta-commentary on the controversy overall (“No, the Hugo nominations were not rigged“) that is worth reading for perspective. It’s worth noting that Scalzi’s work was heavily promoted by Glenn Reynolds, of Instapundit fame, back in the day, a debt Scalzi is not shy about acknowledging publicly. This should, but won’t, dissuade those inclined (as Correia and Beale are) to lump Scalzi in with their imaginary “leftist” oppressors.

I’ve decided to put my money where my mouth is and support the Hugos by becoming a contributing supporter [2] for the next year. This will allow me to vote on nominees and I will receive a packet of nominees prior to the actual voting, which if you think about it, is an incredible value. If you’re interested in supporting the Hugos against these claims of bias, consider joining me as a contributor yourself. Now that I’m a member, I plan to blog about the nominations process as well, so it should be fun.

RELATED: Scalzi’s earlier post about The Orthodox Church of Heinlein. Much like the Bible, and history, the source material often gets ignored.

[1] To be fair, Card and Simmons aren’t really conservative – they are certifiable lunatics. See here and here.

[2] Here’s more information about becoming a member for the purposes of voting for the Hugos. This year’s convention will be in London (“Loncon3”), so membership is handled through their website.

Reason is a limited process and can never explain everything objectively

Reason is a limited process because it arises from consciousness, which observes the universe through filters. The mind has physiological filters (e.g., the wavelengths of light you can perceive, the frequencies of sound you can hear), chemical filters (the specific biochemistry of your brain, your mood and emotion, etc.), and mental filters (pre-existing ideas and biases, the fidelity of your mental models and assumptions, simple lack of knowledge). These are all best understood as filters between you and the “objective” truth of reality. The universe is vastly more complex than you can observe or understand. The process of reason operates only on information that survives that chain of filters, so you are always working from an insufficient dataset.

The brain actually evolved to extrapolate and infer information from filtered input. It often fills in the gaps and makes us see what we expect to see rather than what is actually there. Simple examples are optical illusions and the way the brain can still make sense of the following sentence:

Arinocdcg to rencet rseaerch, the hmuan brian is plrectfey albe to raed colmpex pasasges of txet caiinontng wdors in whcih the lrettes hvae been jmblued, pvioedrd the frsit and lsat leetrts rmeian in teihr crcerot piiotsons.

As a result, there are not only filters on what we perceive but also active transformations of imagination and extrapolation that modify what we perceive. These filters and transformations all happen “upstream” from the rational process, and therefore reason can never operate on an untainted, objective reality. Despite the filters and transformations, the mind does a pretty good job, especially in the context of human interactions on the planet Earth (which is what our minds and their filters and transformations are optimized for, after all). However, the farther up the metaphysical ladder we go, the more we deviate from that optimal scenario for which we are evolved (or created, or created to have evolved, or whatever. I’ve not said anything to this point that most atheists and theists need disagree on).

A good analogy is Newtonian mechanics: a fantastic model for everyday physics, but insufficient for the clock timing of GPS satellites in Earth orbit. This is because Newton did not have the tools available to be aware of general relativity. Yes, we did eventually figure it out, but Newton could not have done so (for one thing, his civilization lacked the mathematical and scientific expertise to formulate the experiments that created the questions Einstein eventually answered).

Gödel’s theorem makes this more rigorous by demonstrating that there will always be statements that can neither be proved nor disproved by reason. In other words, as Douglas Hofstadter put it, Gödel proved that provability is a weaker notion than truth, no matter what axiom system is involved. That applies to math and it applies to philosophy; it applies to physics and it applies to the theism/atheism debate.

Related – a more detailed explanation of Gödel’s Theorem and its implications for reason.

pathological chemistry

FOOF goes BOOM
By way of this entertaining tall tale about how really nasty chemical compounds make for the best rocket fuels (with some conspiracy theorizing about “red mercury” and Chernobyl thrown in for fun), I ended up reading about FOOF, and was treated to one of the more entertaining lines of text I’ve read in some time:

If the paper weren’t laid out in complete grammatical sentences and published in JACS, you’d swear it was the work of a violent lunatic.

Context is king, so start here and then go here. Any chemists in the house?

(here’s the paper online with link to full text PDF!)

Just Another Day #goodbyeEureka – thank you, @SyFy

Eureka, the scifi show on Syfy about a crazy town full of geniuses, has ended. They gave us 5 great seasons and I am grateful to Syfy for allowing them to produce the “series finale” episode as a send-off to all the characters, something that Stargate: Universe never did get.

The best thing about Eureka wasn’t the science fiction or the high concept. It was the characters – they had more heart and were more authentic than on most scifi shows. Firefly was full of wisecrackin’ badasses, but the only person who really was genuine was Kaylee; Eureka had an entire cast full of Kaylees. Stargate Universe was character driven but was more about the high concept of true exploration of the Unknown, and it did that brilliantly, but the appeal was different. You can’t compare Eureka to SGU in that way. In fact, if anything, the template for Eureka was The Cosby Show, which served to inform mainstream America that here was an upper-class African American family with the same dreams and problems as everyone else. Eureka took that template and applied it to Science and scientists, normalizing them the same way. The only way you do that is with a cast of genuinely interesting people, with an authenticity to the chemistry and camaraderie that clearly isn’t limited to the screen.

Regardless of why it was great, it’s over, and though of course I have my usual issues about the broken model of television and cable and the perverse incentives that seem to bury the shows I want to watch while rewarding the ones I don’t, I can accept it. Eureka and Farscape and SGU still exist, I did watch them, and I loved them. And I can recommend them to others here on my blog in the hope that others will be enriched by them as I was.

Debating Dyson spheres

A wonderfully geeky debate is unfolding about the practicality of Dyson Spheres – or rather, a subset type called a Dyson Swarm. George Dvorsky begins by breaking the problem down into 5 steps:

  1. Get energy
  2. Mine Mercury
  3. Get materials into orbit
  4. Make solar collectors
  5. Extract energy

The idea is to build the entire swarm in iterative steps and not all at once. We would only need to build a small section of the Dyson sphere to provide the energy requirements for the rest of the project. Thus, construction efficiency will increase over time as the project progresses. “We could do it now,” says Armstrong. It’s just a question of materials and automation.

Alex Knapp takes issue with the idea that step 1 could provide enough energy to execute step 2, with an assist from an astronomer:

“Dismantling Mercury, just to start, will take 2 x 10^30 Joules, or an amount of energy 100 billion times the US annual energy consumption,” he said. “[Dvorsky] kinda glosses over that point. And how long until his solar collectors gather that much energy back, and we’re in the black?”

I did the math to figure that out. Dvorsky’s assumption is that the first stage of the Dyson Sphere will consist of one square kilometer, with the solar collectors operating at about 1/3 efficiency – meaning that 1/3 of the energy it collects from the Sun can be turned into useful work.

At one AU – which is the distance of the orbit of the Earth – the Sun emits 1.4 x 10^3 J/sec per square meter. That’s 1.4 x 10^9 J/sec per square kilometer. At one-third efficiency, that’s 4.67 x 10^8 J/sec for the entire Dyson sphere. That sounds like a lot, right? But here’s the thing – if you work it out, it will take 4.28 x 10^21 seconds for the solar collectors to obtain the energy needed to dismantle Mercury.

That’s about 120 trillion years.
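Knapp’s single-panel arithmetic is easy to check. A quick sketch in Python, using only the figures from the quote above:

```python
# Sanity check of Knapp's single-square-kilometer estimate,
# using only the numbers quoted above.
flux_per_km2 = 1.4e3 * 1e6      # W per km^2 at 1 AU (1.4e3 W per m^2)
panel_power = flux_per_km2 / 3  # one km^2 collector at 1/3 efficiency, ~4.67e8 W
e_mercury = 2e30                # J needed to dismantle Mercury

seconds = e_mercury / panel_power
years = seconds / (3600 * 24 * 365)
print(f"{years:.2e} years")     # on the order of 10^14 years
```

A single square-kilometer panel does indeed take on the order of a hundred trillion years to bank that much energy, which is exactly the point Knapp is making.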

I’m not sure that this is correct. From the way I understood Dvorsky’s argument, the five steps are iterative, not linear. In other words, the first solar panel wouldn’t need to collect *all* the energy to dismantle Mercury, but rather as more panels are built their increased surface area would help fund the energy of future mining and construction.

However, the numbers don’t quite add up. Here’s my code in SpeQ:


sun = 1.4e9 W/km2
sun = 1.4 GW/km²

AU = 149597870.700 km
AU = 149.5978707 Gm

' surface of dyson sphere
areaDyson = 4*Pi*(AU^2)
areaDyson = 281229.379159805 Gm²

areaDyson2 = 6.9e13 km2
areaDyson2 = 69 Gm²

' solar power efficiency
eff = 0.3
eff = 0.3

' energy absorbed W
energy = sun*areaDyson2*eff
energy = 28.98 ZW

'total energy to dismantle mercury (J)
totE = 2e30 J
totE = 2e6 YJ

' time to dismantle mercury (sec)
tt = totE / energy
tt = 69.013112491 Ms

AddUnit(Years, 3600*24*365 seconds)
Unit Years created

' years
Convert(tt, Years)
Ans = 2.188391441 Years

So I am getting 2.9 x 10^22 W for the full collector area, not the 4.67 x 10^8 W Knapp gets for a single panel. So instead of 120 trillion years, it only takes 2.2 years to collect the energy we need to dismantle Mercury.
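The same arithmetic can be reproduced outside SpeQ. Here’s a sketch in Python; the 6.9 x 10^13 km² collector area and 1/3 efficiency are the figures carried over from the session above:

```python
# Plain-Python reproduction of the SpeQ session above.
flux = 1.4e9          # W per km^2 at 1 AU
area_km2 = 6.9e13     # collector area in km^2 (figure used above)
eff = 0.3             # solar collector efficiency
e_mercury = 2e30      # J to dismantle Mercury (Knapp's figure)

power = flux * area_km2 * eff                 # total power captured, W
years = e_mercury / power / (3600 * 24 * 365)
print(f"power = {power:.2e} W")               # ~2.9e22 W
print(f"time  = {years:.2f} years")           # ~2.2 years
```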

Of course, with the incremental approach of iteration you don’t have access to all of that energy at once. But it certainly seems feasible in principle – the engineering issues, however, are really the showstopper. I don’t see any of this happening until we are actually able to travel around the solar system using something other than chemical reactions for thrust. Let’s focus on building a real VASIMR drive first, rather than counting our Dyson spheres before they hatch.

Incidentally, Dvorsky points to this lecture titled “von Neumann probes, Dyson spheres, exploratory engineering and the Fermi paradox” by Oxford physicist Stuart Armstrong for the initial idea. It’s worth watching:

UPDATE: Stuart Armstrong himself replies to Knapp’s comment thread:

My suggestion was never a practical idea for solving current energy problems – it was connected with the Fermi Paradox, showing how little effort would be required on a cosmic scale to start colonizing the entire universe.
[…]
Even though it’s not short term practical, the plan isn’t fanciful. Solar power is about 3.8×10^26 Watts. The gravitational binding energy of Mercury is about 1.80 ×10^30 Joules, so if we go at about 1/3 efficiency, it would take about 5 hours to take Mercury apart from scratch. And there is enough material in Mercury to dyson nearly the whole sun (using a Dyson swarm, rather than a rigid sphere), in Mercury orbit (moving it up to Earth orbit would be pointless).

So the questions are:

1) Can we get the whole process started in the first place? (not yet)

2) Can we automate the whole process? (not yet)

3) And can we automate the whole process well enough to get a proper feedback loop (where the solar captors we build send their energy to Mercury to continue the mining that builds more solar captors, etc…)? (maybe not possible)

If we get that feedback loop, then exponential growth will allow us to disassemble Mercury in pretty trivial amounts of time. If not, it will take considerably longer.
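Armstrong’s back-of-the-envelope figure can also be checked with the round numbers he quotes:

```python
# Check of Armstrong's figures from the comment quoted above.
solar_output = 3.8e26   # W, total power output of the Sun
binding_e = 1.8e30      # J, gravitational binding energy of Mercury
eff = 1 / 3             # efficiency assumed in the quote

hours = binding_e / (solar_output * eff) / 3600
print(f"{hours:.1f} hours")
```

With these round numbers it comes out nearer four hours than five, but the conclusion is the same: given the Sun’s full output, dismantling Mercury is an afternoon’s work.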

Transparent aluminum? That’s the ticket, laddie

So, it’s actually a thing – called ALON. It’s not so much a metal as an aluminum-based ceramic called aluminum oxynitride, but the point is, it’s aluminum, and it’s transparent:

there be no whales here

and this stuff is strong – 1.6″ is enough to stop a .50 AP bullet that easily passes through twice that thickness of laminated glass armor:

aye, ol’ Scott woulda been proud. And just for old times’ sake:

Let’s take this opportunity to correct a misconception: they did NOT use transparent aluminum for the whale tank. They traded the “matrix” for it to the engineer at the large plate glass manufacturing place in exchange for enough conventional plate to build the tank. Which was a lot.

MAGNIFICENT – string theory FTW and loop quantum gravity FAIL

Calabi-Yau for the win!!
I’ll freely admit that I am comprehending only about 10% of the argument, but this is still a magnificent post about why string theory is right and why loop quantum gravity is wrong.

And incidentally also reveals that the science writers on Big Bang Theory really are on top of the game. Sheldon’s snort of derision here is utterly justified.

“I’m listening. Amuse me.”

I really need to start watching the show. Hulu or Netflixable, I assume…

astounding Cassini video flyby of Saturn’s rings and moons

(UPDATE: credit due, via Mark, who accounts for a disturbingly large number of my “neato lookit” posts of late.)

This is incredible – a digital compilation of images from the Cassini probe, no CGI or animation, assembled into breathtaking flybys of the Saturn system. The best part is the third and final sequence, where we fly by Titan and Mimas, pass through the ring-plane, and swoop past Enceladus.

5.6k Saturn Cassini Photographic Animation from stephen v2 on Vimeo.

I’ve a photo of me from 1996 as a visitor to JPL (where my friend’s dad worked) in front of the Cassini heat shield. I really need to dig that up… Let’s also remember that the controversy about Cassini being nuclear powered was totally bogus, and use that as a data point for why nuclear power is not the ultimate bugaboo that people assume it to be after the still-unfolding tragedy and disaster in Japan.

diet, cholesterol, and heart disease skepticism?

I’m involved in a debate over diet and health over at Dean’s and in the course of that debate, was encouraged to read a paper by Corr et al. that suggests low-fat diets are essentially useless for reducing heart disease. This post started out as a comment but it grew enough to warrant a post in its own right. So, let’s look at what the Corr paper is actually saying, shall we?

The international bodies which developed the current recommendations based them on the best available evidence[1-3]. Numerous epidemiological surveys confirmed beyond doubt the seminal observation of Keys in the Seven Countries Study of a positive correlation between intake of dietary fat and the prevalence of coronary heart disease[4] although recently a cohort study of more than 43,000 men followed for 6 years has shown that this is not independent of fiber intake[5] or risk factors. The prevalence of coronary heart disease has been shown to be correlated with the level of serum total and low density lipoprotein cholesterol (LDL) as well as inversely with high density lipoprotein.

So, high intake of dietary fat indeed has a positive correlation for coronary heart disease. Corr is conceding this at the very start!

Further, coronary heart disease is also indeed associated with high LDL and low HDL. So far I am not seeing any Cholesterol Conspiracy here… the ADA seems to be right on the ball.

So, we’ve already established that CHD is associated with high fat, high LDL, and low HDL. So, what’s left to argue about?

As a consequence of these studies, it was assumed that the reverse would hold true: reduction in dietary total and especially saturated fat would lead to a fall in serum cholesterol and a reduction in the incidence of coronary heart disease. The evidence from clinical trials does not support this hypothesis.

Hmm. Two sentences here. One about a reasonable inference from the conceded association of fat and LDL with CHD. But OK, let’s call the question of whether the reverse is true Question A – “does reducing fat and LDL in the diet reduce CHD?”

And then another sentence, about evidence from clinical trials not supporting that inference. What about those clinical trials, exactly?

It can be argued that it is virtually impossible to design and conduct an adequate dietary trial. The alteration of any one component of a diet will lead to alterations in others and often to further changes in lifestyle so it is extremely difficult to determine which, if any, of these produce an effect. Dietary trials cannot generally be blinded and changes in the diet of the ‘control’ population are frequently seen: they may be so marked as to render the study irrevocably flawed. It is also recognized that adherence to dietary advice over many years by large population samples, as for most people in real life, is poor and that the stricter the diet, the worse the compliance.

Ah. So the available evidence from clinical trials is fundamentally susceptible to systematic error. Fair enough. So any conclusions we draw from them should be tempered accordingly, right?

(long analysis of clinical trials in literature follows)

The message from these trials is that dietary advice to reduce saturated fat and cholesterol intake, even combined with intervention to reduce other risk factors, appears to be relatively ineffective for the primary prevention of coronary heart disease and has not been shown to reduce mortality.

OK, so the trials focusing on low-fat diets alone didn’t show any primary prevention benefit. Well, see caveat above, right? (and Corr’s noted exception about the MRFIT study…)

However, what about secondary prevention?

Well, good! But still, is there some reason that maybe we aren’t seeing better results here? Is diet necessary, or sufficient? Let’s look at studies that not only remove fat but also add HDL:

The first successful dietary study to show reduction in overall mortality in patients with coronary heart disease was the DART study reported in 1989[20]. The three-way design of this ‘open’ trial compared a low saturated fat diet plus increased polyunsaturated fats, similar to the trials above, with a diet including at least two portions of fatty fish or fish oil supplements per week, and a high cereal fibre diet. No benefit in death or reinfarctions was seen in the low fat or the high fibre groups. In the group given fish advice there was a significant reduction in coronary heart disease deaths and overall mortality was reduced by about 29% after 2 years, although there was a non-significant increase in myocardial infarction rates. The reduction in saturated fats in the fish advice group was less than in the low fat diet group and there was no significant change in their serum cholesterol.

Finally, the more recent Lyon trial[21] used a Mediterranean-type of diet with a modest reduction in total and saturated fat, a decrease in polyunsaturated fat and an increase in omega-3 fatty acids from vegetables and fish. As in the DART study there was little change in cholesterol or body weight, but the trial was stopped early following a 70% reduction in myocardial infarction, coronary mortality and total mortality after 2 years.

In other words, adding HDL to your diet helps a lot, whereas reducing polyunsaturated fat (or just increasing fiber) still doesn’t seem to do anything. We’ve established that a modest increase in HDL can help. But have we established that a modest reduction in LDL will not help?

Unfortunately, the design and conduct of these trials are insufficient to permit conclusions about which polyunsaturates and other elements of these diets are the most beneficial. The long term effects of these trials[20,21] and the compliance with the dietary regimes remain to be seen.

So, we don’t really know if these studies answer that question. It’s possible that lowering LDL has a longer-timescale benefit than increasing HDL. These studies don’t answer the question either way, because of the limitations Corr concedes – certainly we haven’t proven that lowered LDL is not genuinely helpful yet.

Anyway, how much LDL was really reduced anyway?

An important aspect of the lipid-lowering dietary trials is that on average they were only able to achieve about a 10% reduction in total cholesterol. The results of recent drug trials have demonstrated that there is a linear relation between the extent of the cholesterol, or LDL, reduction and the decrease in coronary heart disease mortality and morbidity, and a significant effect seen only when these lipids are lowered by more than 25%[23].

Ahhhh. Corr goes on to quote a bunch of studies that show frankly awesome improvements in mortality using drugs to lower LDL by 25% or more.

(in other words, definitively proving that lower LDL does indeed reduce heart disease. We just answered Question A from above).

So, let’s summarize:

conceded by Corr at the outset:
– increased HDL reduces CHD.
– increased fat increases CHD.
– increased LDL increases CHD.

dietary trials:
– somewhat lowered LDL does not reduce CHD.

drug trials:
– significantly reduced LDL does reduce CHD.

caveats:
– dietary trials have systematic errors.
– long-term trials on reducing LDL have not been performed.

special note: The MRFIT trial follow-up focused on reducing LDL through diet alone, and did show reduced myocardial infarctions over a longer term.

My conclusion from this would be to (a) increase HDL now for immediate benefit, and (b) reduce fat and LDL in my diet for long-term benefit. Seems obvious enough, and fully in accord with what the ADA recommends.

Corr’s conclusion?

diets focused exclusively on reduction of saturated fats and cholesterol are relatively ineffective for secondary prevention and should be abandoned.

umm.. what?!?!

This is where they cross over into vaccines-autism and fluoridated-water territory, frankly.

What would have made the Corr paper immeasurably stronger would have been for them to devise an experiment that would answer these questions and fill the gaps. That’s always my challenge to these self-styled “skeptics” of the scientific consensus. What’s the experiment you propose? What would you do to make your case?

That’s how science works. Theory drives experiment, experiment refines theory, and back again. If your claim is that the available evidence (in this case, clinical trials) doesn’t support the contention, that’s not enough. You need to come up with an experiment that actually refutes the contention. Formulate your hypothesis and test it! Anything else is just nitpicking from the sidelines, which is how most of these agenda-driven meta-analyses end up reading.

Frankly, I am very much eager to be able to dispense with the low-fat, low-cholesterol crap. Here’s why in a nutshell.

So please, Dr Corr and any other “cholesterol skeptics” out there: show me the proposal for your experiment, and I guarantee you the fast food industry will show you the money.