Warding off viruses of the mind – Critical thinking skills w/ Dr. Tim Sharpe & Ari Whitten

Content By: Ari Whitten & Dr. Tim Sharpe

In this episode, we’re talking about the best framework for using your brain effectively to deal with and navigate the terrain of both information overload and the spread of misinformation.


In this podcast, Dr. Sharpe and I discuss:

  • How to successfully investigate the truth
  • The most important questions to ask yourself anytime you have a discussion with someone
  • What ‘signaling’ is and how it affects our perceptions of others
  • Why communicating via text and social media fails so often
  • The importance of trying to prove yourself wrong before debating with others
  • The concept of ‘viruses of the mind’
  • How to analyze information when you have opposing data
  • Why being wrong is okay and what to do when you realize your viewpoint is not correct
  • The importance of understanding the consequences of both actions and inactions
  • Dr. Sharpe’s best tips for having productive conversations

Listen or download on iTunes

Listen outside iTunes

Transcript

Ari: Hey, there. Welcome back to The Energy Blueprint podcast, I’m your host Ari Whitten. With me now is my friend Tim Sharpe, Dr. Tim Sharpe, who is an adjunct faculty member in the postgraduate Human Nutrition and Functional Medicine program at the University of Western States, where he teaches integrative therapies to physicians and allied health professionals. He has a doctorate in acupuncture and Chinese medicine and a Master of Science in Human Nutrition and Functional Medicine.

Dr. Sharpe has 15 years of clinical experience diagnosing and treating complex diseases. He currently serves as the head nutritionist at The Institute for Human Kinetics. He published his first peer-reviewed paper in 2016 and has multiple original research projects currently in various stages of the peer-review process. His work has been honored by ResearchGate with an assortment of achievements, including most-read author in the United States, and most-read publication in the following areas: medicine, nutrition and dietetics, sports medicine, and physiology.

Dr. Sharpe’s current passion and what we’re going to be talking about in this podcast centers on critical thinking. He has a new blog titled Reason Chasing and an upcoming podcast by the same name, which I’m very excited about, by the way, because I know Tim and I’ve had lots of conversations and he’s one of the smartest, clearest-thinking people that I know. I’m very excited for him to do this podcast talking about that subject and for us to be doing this podcast now on The Energy Blueprint podcast, talking about that same subject. Welcome, Tim, such a pleasure to have you.

Dr. Sharpe: Thank you for that introduction. Those honors are given out on a weekly basis, so if you can really knock it out of the park for one week, then [laughs] you can do well there. You’ve had a lot of academics and physicians and laypersons on, and you’ve had a lot of people that have a lot more published work than I do.

Ari: On that note, I’ll say just on a personal level, you’re one of the people that I appreciate engaging with the most in my own personal life. You’re someone that we can have personal conversations with. You can come in, you can challenge me on certain positions, I can challenge you, we can have that discussion. I don’t really like engaging in those kinds of dynamics and debates unless I know the person is worth my time.

That they’re a highly intelligent person who is going to help me find flaws in my thinking and arrive at some deeper insights, learn something, grow in my perspectives in some way. I really consider you to be one of a very small group of people who I will always devote that kind of time to because you always help me do that.

Dr. Sharpe: Well, thank you. I know we were talking earlier about the concept of a dialectic, and that’s really just the idea of investigating the truth of opinions, and the goal isn’t to be right. The goal is, “What can we learn from this? Where can I advance and where can you advance?” I’m not a statistician like my friend and J. Crew khaki spokesperson, Brad Dieter, but I like to apply statistical concepts like Bayesian statistics to the way that I think.

There’s a concept in Bayesian statistics known as updating, where you take whatever it is that you currently believe, and that’s your starting point. Then when you take in new information, you look at that information and you look to see, where does that move me on the continuum? A really good way to do that is with statistics, sorry, with percentages. Say I’m 40% certain of concept X, 40% certain that this is true, and someone presents me with new information.

I can look at that and say, “That could be true, but is it true enough that it moves me from 40%?” Because it could be true enough that it just leaves me in the same place even though there was truth to what they said. If you have that continuum, where you slide the continuum based on the evidence that comes in, you can update up or down. In the concept of dialectic, you’re always actually really honestly trying to do that.

I think that’s why we get along in our conversations that could be contentious, but they’re not, because nobody’s trying to get one up on somebody. It’s, you said this thing, “I don’t understand. What do you mean?” Then you respond, and then we get to a place where both of us are in a different place than where we started.
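An aside for readers: to make the “updating” idea concrete, here is a minimal sketch in Python, added for illustration only. The numbers and the scoring of evidence as likelihood ratios are invented, not something worked through in the episode. It treats a stated confidence as a Bayesian prior and each new piece of evidence as a likelihood ratio (how much more likely the evidence would be if the claim were true than if it were false).

    # Hypothetical sketch of Bayesian "updating" on a stated confidence.
    # All numbers are invented for illustration.

    def update(prior, likelihood_ratio):
        # likelihood_ratio: how much more likely the evidence is
        # if the claim is true than if it is false
        odds = prior / (1 - prior) * likelihood_ratio
        return odds / (1 + odds)

    confidence = 0.40                     # "I'm 40% certain that concept X is true"
    confidence = update(confidence, 3.0)  # fairly strong supporting evidence
    print(round(confidence, 2))           # -> 0.67: the continuum slides up
    confidence = update(confidence, 0.9)  # weak opposing evidence
    print(round(confidence, 2))           # -> 0.64: barely moves

Strong evidence moves the slider a lot; weak evidence barely moves it, which is the point of tracking a percentage rather than a yes or no.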

The concept “viruses of the mind” explained

Ari: Absolutely. Well said. We have tentatively titled this podcast Approaches in Critical Thinking to Ward Off Viruses of the Mind, which I love. I wanted to ask you, I guess for listeners who are here right now listening, who are still deciding, “Hey, is this podcast for me, I want to learn about health.” Normally this podcast is talking about, “Hey, here’s some cool health knowledge about gut health or brain health or mitochondria or something like that, and here’s a few practical takeaways you can get.” This one’s a bit different. We’re talking about how to avoid viruses of the mind. First of all, explain what that is and then explain two or three big takeaways that you want people to get from listening to this episode.

Dr. Sharpe: Viruses of the mind is a concept introduced by Richard Dawkins in 1991 in an essay actually titled Viruses of the Mind. The context of that is a little different than what I’m talking about and it’s irrelevant to this conversation, but the core of it is there are certain habits that all of us have. You could probably include biases in there. It’s certainly not exclusively limited to a bias, but there are certain habits that we have in the way that we think and the way that we respond to information. You can look at some of those as being viruses.

Some of those are parasitic on our ability to think clearly. If you can’t identify those and have good strategies for avoiding them– Well, first of all, recognizing that they exist, then avoiding them, and correcting them when you find that you’re engaging in those practices– then it’s hard to get to the destination you’re trying to reach. That’s foundational to that concept of dialectic, right? You’re trying to get somewhere and you can’t get there if your mind is taking you in the wrong direction constantly. That’s what I mean when I say viruses of the mind.

In terms of what I would like people to get from this, I teach in the Master of Science in Human Nutrition and Functional Medicine program. The context of what I’m teaching is medicine and sports performance, and GI health, gastroenterology. Those are all obviously medical or sports medicine-related, but I find that with my students it’s difficult for them to really take in some of the information if we can’t clear out some of the cobwebs in their thinking, because when they look at data it can be confusing, and it can be confusing for a lot of different reasons.

Some of that can be technical, but I don’t really care so much about that. The technical part you’re either going to learn or you’re not; that doesn’t matter so much. But having an understanding of how to go about looking at things increases your ability to look at everything and to evaluate any evidence, whether it’s the latest research that you’re looking at or a blog post, or a debate, or just a conversation you’re having with a colleague.

I actually devote a fair chunk of my time in the classes that I teach to trying to help students with those concepts, and I think in that way this would be similarly relevant to your listeners because these are all things that we’re faced with every day. Having some tools and concepts to clarify that and make that a shortest distance to the destination approach I think is helpful.

Ari: I was listening to a podcast with Tristan Harris. Are you familiar with him?

Dr. Sharpe: I am, but I couldn’t cite any of that work. I just know the name.

Ari: He’s an expert on social media and basically technology in general, and how technology is interfacing with human lives. In particular, he’s really an expert on social media and our engagement with that, and how the algorithms work, and what they’re doing to the human brain as far as our attention and how that can lead to bad outcomes, in particular, how it is leading to bad outcomes. In addition, just how the current state of the Internet, in particular, has led to this massive war of information, of all these fragmented sources of information and all these competing and contradictory narratives.

It’s become so easy for people to operate in completely different realities where there are no shared facts, no shared reality. I think one of my big goals, one thing that I hope people will get from these exchanges, is a framework for how to use your brain effectively to deal with this overload of information and misinformation, and how to navigate that terrain.

Dr. Sharpe: No, I think people will definitely get that from what we talk about today. I’m excited.

Ari: Cool, let’s do it.

Dr. Sharpe: All right.

The mind is like a computer

Ari: Where do you think is the best place to start here?

Dr. Sharpe: I think of the mind in a lot of ways like a computer and the mind virus thing works out pretty well because obviously, you can get computer viruses as well. I like to think of the way that we think as essentially being our operating system. I run different programs in my operating system that help me think better and so just going through and talking about some of the set routines that I have always running in the background that I’m filtering most things through, I think would be beneficial.

Ari: These are like skillsets. They’re like mental models or skillsets or processes that you use when you’re engaging with information, whether information on scientific topics or political topics or whatever?

Dr. Sharpe: Yes, absolutely. They’re always sort of running. We’ll go through a few of them. One of the first ones, this is a really big concept. There’s a fairly recent group of people known as rationalists. One of the things that they talk about, which is amazing to me, is: what do I think I know and why do I think I know it? You could spend an hour just breaking that down. Anytime you’re having a discussion with somebody and you bring up a point, it’s relevant to ask yourself, “Why do I think that is true? What is it that leads me to believe that this is the thing that actually is?”

The more you do that, the more you realize that the things you think you know, either you actually don’t really know them or if I scroll back to earlier in the conversation, when I was talking about Bayesian statistics and applying percentages, you may know it, but you know it to a percentage, to a degree. “I 60% believe this is true,” but when we talk about things, we don’t qualify them like that. Who does? It’s very uncommon to find somebody who qualifies their statements in a way that would help both parties understand what it is that we’re talking about.

If I said statement X to you and I was only 60% certain that that was true but I didn’t say that, you’re operating on the assumption that I just told you something that I, at least in my view, feel is 100% true. Then you’re going to respond with your own partial version of a rebuttal that you are whatever percentage certain on. We’re already building this discussion on matchsticks that are all sort of ready to crumble, and the more we do that, the less solid the ground of the conversation becomes, and the more difficult it is to really get anywhere.

How culture may shape your way of thinking

Ari: Just to interject briefly, there’s one more element in there, almost like a peer-pressure, cultural element, where it’s regarded as masculine to be certain, to be excessively confident and black and white in one’s position, and it’s regarded as a sissy thing or an effete thing to not have tremendous confidence in this black and white position. I think that cultural framing that we have around it drives us in a bad direction a lot of times.

Dr. Sharpe: There are some really interesting concepts in the idea of signaling that have a lot to do with what you’re talking about there. There’s a great book called The Elephant in the Brain, and it goes into detail on the importance that signaling plays in our lives. When I say signaling, what I’m referring to is that the things we think are important to us often aren’t actually– if we were able to look at what was going on in our heads and in our hearts, they’re not the things that we’re representing to ourselves or to the outside world. We tend to send these signals and we act based on signaling.

That concept of masculine versus effete would be an example of wanting to send that signal. Even if I knew I was 60% confident, I don’t want to look wishy-washy. How does that look when my peers see me and I’m like, “Well, kind of, I feel like it might be this.” Who wants to listen to that guy? I do. [laughs] I don’t want him to say it like that. I want him to say, “I’m 60% confident.”

Ari: What this made me think of is mask-wearing, which is obviously a very contentious topic right now, for many reasons. Political reasons and like, “Hey, I don’t feel you should remove my freedoms.” Then other people are saying like, “Hey, when you’re endangering lives you need to wear the mask because this is not just about you. It’s about protecting others as well.” We have those kinds of more politicized narratives, but then there’s the science as well, which is we have studies saying, “Hey, here’s this modeling study that says 66,000 infections in New York could have been prevented if everybody was wearing masks, therefore, we should all wear masks. Here’s the science.”

Then you also have a whole bunch of randomized controlled studies saying there’s no benefit to mask-wearing. It does not reduce the risk of contracting influenza or influenza-like illness. You can have science that you can point to pointing in both directions and making both arguments. I think this is an example of where if you’re engaging in this topic and you’re saying, “Hey, here’s the studies that support my position on whichever side of that debate you’re on.”

I think it’s critically important for people to actually understand the evidence and the degree of strength of that evidence that is supporting their position on either side of that debate to be able to say, “You know, there’s quite a bit of evidence suggesting this, I see that there’s some evidence suggesting the opposite.” In other words, maybe you shouldn’t have a really extreme black and white position on something given the limitations of that body of evidence.

Dr. Sharpe: I think there’s a lot of truth to what you said. I think it’s unquestionably true that wearing a mask or not wearing a mask is signaling. Whatever else it is, and it is other things, it is 100% also signaling. It’s a very relevant point to bring up.

Ari: When you say signaling, this is synonymous with the concept of virtue signaling, correct?

Dr. Sharpe: Well, virtue signaling would be– That would be a version of signaling and probably the strongest example of signaling. I can go through some types of signals that exist. An example is laughter: it’s certainly a response to humor, but it’s also a signal that you hold no hostility. When you’re having a conversation with somebody, if I said something that made you go, “ah,” and then you see me laugh, you’re like, “Okay. I now know.” We’re constantly sending these signals.

A lot of the signals are through facial expressions, which of course is one of the reasons why texting and instant messaging goes badly very quickly: we aren’t programmed to understand that the signals we don’t even know we’re sending are no longer being sent. People talk about that. They know that using words instead of doing what you and I are doing face to face is fraught with mishaps, but they don’t really think all the way through to the root cause, which is the signaling. Another–

Ari: Just define what virtue signaling is for people unfamiliar with that term.

Dr. Sharpe: When you want to demonstrate to another person that you hold a certain belief or you belong to a certain tribe, or camp or ideology, you say or do things that indicate that you hold that belief or that you’re part of that group. It’s not inherently a bad thing. It gets a really bad rep. The problem is when it’s all virtue, or all virtue signal and no real meaning or intent or motive for change. If you have skin in the game and you’re doing stuff, then virtue signaling isn’t necessarily a bad thing.

I’ll give you an example of where something like a virtue signal is a net social good. If I were to write, “I just gave $100 to the Help Stop Lupus Foundation,” and someone sees that, there’s clearly virtue signaling, right? You said the quantity and you said where. But someone who sees that might think, “You know, I really should do that too.” There’s no negative animus in that whatsoever.

It is 100% a virtue signal, and it’s a net good. Now, there are people that could look at that and say, “Oh, well, you’re just bragging about–” and the “just” part of that is wrong, but you are bragging about that. Yes, that’s kind of true. That’s part of what a signal is. If you get too tied up in that, you’re failing to recognize that you yourself, because we all are, are like 90% signaling.

If you get really tied up in the concept of virtue signaling, be careful because you’re doing it a lot. It’s just how much of it is technically virtue-signaling versus all the other types of signaling that we could be doing, the laughter, what happens with conversation, education. You and I are both big fans of education, are we doing it to learn? I mean.

Ari: I’m doing it to virtue signal. I don’t know about you.

Dr. Sharpe: I know. Well, as someone who teaches in a graduate program, I shouldn’t be saying that people aren’t going to school to learn, but that’s only a part of it. If you really wanted to learn whatever thing that you’re learning, is that the only way you could do it? Like say, I teach sports nutrition, so I’ll just take my own, is the only way you can learn sports nutrition by enrolling in a graduate program and taking sports nutrition from me?

Or could you save $3,000, or however much it is, and do a Coursera course in sports nutrition, and then supplement it by going to someone who does that in your community and works with a local college and saying, “Hey, can I shadow you? Can I see what you’re doing, and how you’re working with athletes and what kind of dietary prescriptions?”

Absolutely, you could do that. One could argue that might be a better way to do it. That’s not what people do, and I’m glad that they don’t because it pays my bills.

That’s why almost nothing is what it seems on the surface in terms of motivations. Just to tie a bow on that, I think, in general, it’s a good thing to recognize that in yourself, because you can look to see what your motivations are here. You don’t want to dig in and internally focus all the time, because research shows that people that are always inwardly focusing tend not to live super happy lives, but you need to have the skill of being able to do that when necessary, to recognize if you need to pull back and not go the direction that you’re going and just recalibrate.

What do you think you know, and why do you think you know it?

Ari: I think I took us on a little bit of a digression there. Let’s get back to, “What do I think I know and why do I think I know it?”

Dr. Sharpe: Sure. I think using that concept, what it allows you to do is stay true to whatever point that you were trying to make. It allows you to even stop and think, “What point was I trying to make?” Then there are some other concepts that are related to that one, like, there’s this idea of heavy lifting words. An example would be, a lot of, or mostly, or usually. If you catch yourself using those words, stop and think to yourself, “What does ‘a lot of’ mean? Am I properly communicating what that means?” What’s the magnitude of ‘a lot of’? When I hear someone say, “Well, a lot of the times.” I think to myself, “How many times?” [chuckles]

What is ‘a lot of’ to you? Because a lot of to me and a lot of to you may not be the same thing. Think about the words that you use when you communicate and, in the same way, think about the words other people are using. Those are some of the things that help clue you in on whether or not you’re even having the same conversation. When you’re saying things that you’re not really sure of but indicating that you are sure of them, just check in with yourself, and check in with others, not by challenging them, but simply by asking, “When you say ‘a lot of’, what do you mean?”

Say you’re talking about medicine. Someone says, “A lot of patients really seem to have this problem where they X,” and you’re like, “Oh, wow, you’ve seen that clinically, how frequent has that been?” “Well, I had this one patient.” That’s it. You’d be amazed, or maybe you wouldn’t be amazed, how often that’s the case. I mentioned statistics. I mentioned this thing called Bayes’ theorem, which isn’t important to get into, but another concept from Bayes’ theorem is the concept of priors, and that is basically that whatever beliefs you hold going into the conversation are your priors.

All of us have them. We all have pre-existing beliefs. Some of them are biases. There are all kinds of things. Some of it is true knowledge that you have. Some of it is knowledge that you think you have that you don’t. When you’re having a discussion with someone, it’s good to understand their priors, because then you can more easily understand their frame of reference, and you understand where they’re coming from, or at least you better understand where they’re coming from.

You’re less likely to, I guess, misinterpret something that they’re saying, particularly in a bad way. It’s really easy if you have real conversations. I know you do all the time and you’re constantly having these conversations that would otherwise be heavy, but you’re just asking because you’re genuinely curious about what the people say about these things, not because you’re trying to challenge anybody. If everybody understands everybody’s priors, it’s easy.

Ari: I’m trying to challenge people.

Dr. Sharpe: Yes.

Ari: [laughs]
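An aside on priors for readers: here is a small sketch, again with invented numbers rather than anything computed in the conversation, of why knowing someone’s starting point matters. The same piece of evidence, scored the same way, leaves two people in very different places depending on the prior each one walked in with.

    # Hypothetical illustration: identical evidence lands differently on different priors.
    # Numbers are invented for the example.

    def update(prior, likelihood_ratio):
        odds = prior / (1 - prior) * likelihood_ratio
        return odds / (1 + odds)

    evidence_strength = 4.0  # one reasonably strong piece of supporting evidence

    skeptic = update(0.10, evidence_strength)   # enters the conversation at 10%
    believer = update(0.70, evidence_strength)  # enters the conversation at 70%

    print(round(skeptic, 2))   # -> 0.31: still unconvinced
    print(round(believer, 2))  # -> 0.90: now all but certain

Neither person is being irrational; they simply started in different places, which is why surfacing priors early makes the rest of the conversation easier to interpret.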

Dr. Sharpe: It’s funny, when you and I have gone back and forth on social media a little bit, I’m always really happy that other people don’t chime in. I feel like, not that I don’t like other people to have opinions on things, I feel like you and I can get to what it is that you think and that I think a lot quicker and more clearly if it’s you saying what you think and me saying what I think. As soon as someone else comes in, they have a different prior.

They don’t have the history that you and I have, where we already know what needs to be said and what doesn’t need to be said. In social media, that doesn’t happen. It’s really interesting in that social media context, because what you have is this whole spectrum of priors. For you and me, we have interpersonal priors. Then I’ve seen a selection of the things that you’ve posted, but only a smattering. When I see you talk about something that would be controversial, so we’ll bring up masks again.

I’m hesitant to jump in if you say something that I feel like might not be true, because I know you’ve probably discussed this a bunch already, and I don’t want to force you to rehash everything that you’ve rehashed before. If I enter into what I know is likely to be a late-stage conversation, I need to carefully craft what it is that I say. If you think back on your experience of our conversations, the way that I’ll frame what I say is very specifically contextual to the thing you just said, and the potential veracity or grandiosity of the thing you just said. We have a good rapport, but we’re still learning where each other is coming from.

For you, you’re used to people not necessarily taking that approach. You might think that I’m commenting on this greater body of work and all these other things that you might have said, when really I’m restricting myself to this finite thing. If you look at exactly how I wrote it, that’s clear, but most people don’t really dial in the precision of what they’re saying in that way, so it takes a couple of back and forth messages between us and then you’re like, “Ooh, you’re right.” Not “I’m right,” but “I understand what you’re saying by that.”

Ari: Translate that into something that applies to everybody listening, how can they take that tack, that strategy that you’re describing, and use it productively in their life when engaging with other people?

Dr. Sharpe: Let’s see. I wanted to get into this concept of Hanlon’s razor, and there’s an intersection with Hanlon’s razor here. It’s not a precise fit, but it’s a good way for me to segue. Hanlon’s razor is: never attribute to malice what could be explained by stupidity. You see that a lot on social media, and here’s the way that I guess I would try to loosely apply Hanlon’s razor: when you see something on social media that feels not right to you, there could be a lot of the conversation that has happened that you’re not privy to.

Whatever your objection is, those two people may have already worked out that objection and come to an agreement on that concept and moved on. Then when you enter, if you come in guns blazing, “Well, no, what about X?” They’re like, “We dealt with X literally three weeks ago.” Then someone else comes in a week later and they’re like, “But you didn’t think about X.” There’s this hubris of coming in as the person who’s got the thing that nobody thought of, and maybe you do, but you’ve got to feel it out and determine whether or not that’s actually true before you come in and drop the knowledge hammer that is really not a knowledge hammer.

Ari: You know, if that’s a hammer, it’s an ignorance-hubris hammer. Tim, look, tell me, should we mandate masks or not? Should we?

Dr. Sharpe: Should we? There you go. Well, the data said– No, those are tough questions. When you’re talking about masks, I mentioned signaling and the second thing that I was going to say is we have to get through the entire podcast on talking about how to think about things before we could even circle back to how would we talk about having the mask discussion.

The danger of “black or white” thinking

Ari: What’s the next concept that we should dig into? Actually, I want to interject one. This is something, I can’t say that I’ve actually learned this anywhere, or maybe I did subconsciously and I forgot about it, but it’s the concept of continuums, or spectrums as I sometimes call them. I think it relates to your percentages idea that you explained earlier, but I really like grounding concepts on a continuum and looking at where someone is landing on that continuum.

There are very simplistic examples of this of like, do you have fatigue or do you not have fatigue? Well, it’s not an on-off thing; it’s a dimmer switch type of thing. It’s like, there are many points on the spectrum that someone could be. Someone could have the energy of the average five-month-old puppy that’s just bursting with energy all day long and just going crazy and wants to play non-stop, or someone could be bedridden and debilitated and can barely get out of bed and take 20 steps without feeling exhausted from it.

Then there’s a whole broad spectrum of where 99% of people are going to land that’s in between those two points. I really like thinking of things in terms of that concept, or even, in terms of this subject matter, something like conspiracy thinking. There are some people who imagine– Like, there’s conspiracy theorists and then there’s normal thinking. The reality is there’s a spectrum: people on one end of the spectrum are inclined to be skeptical of everything and think everything is not what it seems and it’s a conspiracy.

The puppet master is at play deceiving us, and really everything, as it seems, is not really what’s going on. There are things that are bubbling under the surface and evil actors manipulating everything, and really the earth is not round, it’s flat. We never landed on the moon, all these kinds of things. On the other end of the spectrum, you have what I personally perceive to be just as flawed a type of cognitive bias, one that most people aren’t aware of and don’t really pathologize in the same way we pathologize conspiracy thinking, which is people who just trust the authorities blindly with everything.

It’s like, we know politicians lie. We know the media often lies. We know that people do actually conspire and conspiratorial acts are a real thing. In law, it’s recognized. Conspiring is a real thing in law and many, many people throughout history have been convicted of that. Yet, there’s this cognitive bias that’s like, “No, conspiracies don’t exist. I don’t acknowledge the veracity of any conspiratorial claim. Everything is as it seems.”

There’s paranoia versus trustnoia, and you can cite examples of where each of those two modes of thinking, those two cognitive styles, can really go wrong. Where do you land in between those two ends of the spectrum, and how do you operate in a way that maybe is skeptical, but is not so skeptical of everything that you think everything’s a conspiracy?

Dr. Sharpe: The concept of a continuum I think is interesting. Another interesting thing about a continuum is if you think of it visually: if you have a continuum and your continuum is here and you want to know where someone falls, someone else’s continuum might be like this. I’ll give you an example. Say I’m talking to a patient and I’m talking to them about bowel movements. If I’m imprecise with my language and I say something like, “How is your bowel movement frequency?” They might say something like, “Good, yes, good. There are no problems there. I have bowel movements. It’s all good, no discomfort.” Like, okay.

Or if I ask the question, “How many bowel movements do you have in a day?” You get this in clinical practice: the same person that answered the previous question exactly the way I just described might say once a week, but they don’t hurt. To them, it’s not pathological. It’s just a thing. They have this continuum where once a week is normal for them. It doesn’t seem to cause problems, but for somebody who’s a practitioner, it’s clearly not where we would put normal on the continuum.

It’s not only that continuums are important, but recognizing that everybody has different concepts of them, and then calibrating. Calibration is huge. That’s another one of the big concepts; I’m always thinking about what needs to be calibrated. You made me think of something. I had a list of things that I knew I wouldn’t get through all of, but things that I thought would be interesting to talk about, and you brought up conspiracy thinking.

I love the concept of conspiracy hypothesis versus conspiracy theory because the theory implies a gravitas that conspiracies typically do not have. What I wanted to bring up is people that are prone to conspiracy thinking, they do this thing. There’s a thing called Gish gallop. A Gish gallop is when you overwhelm your opponent, whoever it is that you’re talking to with so many different arguments that it’s hard for them to combat.

You just get this and this and this and this and this and this and this, all of these things taking you towards an end. If we scroll back to when we very first started this conversation, I talked about taking your opinion, your certainty, and applying a percentage to it. This is the secret to getting out of the Gish gallop trap. Arguments are not additive. They’re just simply not. What you need to look at, when you’re presented with the gallop, is everybody’s saying, “Well, but this, and there is this.” You could do this with flat Earth.

I don’t engage in flat Earth stuff. I don’t know all the things that they say, but I’m sure they could list them off [crosstalk]. I’m sure you flat earthers could list 10 things that all sound really plausible or potentially plausible, and just like, boom, boom, boom, boom, boom all the way through. When you use that approach where you take and assign a percentage– If I’m like, “I’m 90% sure that the Earth is–”

I probably shouldn’t have chosen flat Earth because I’m more than 90% sure, but the example doesn’t work if I’m 100%. If I’m 90% sure that the Earth is not flat and you present me with an argument, I’m like, “Well, I guess that’s possible.” How likely is it? If it isn’t greater than 90%, it’s not going to move me, because I’m already 90% sure that this thing is true. All these 2% things– it’s not like if I have fifty 2% comments, that reaches 100 and then flips me. They’re all 2%. I dismiss 100% of them, I dismiss all of them.

It’s really easy to get thrown by a preponderance of mediocre evidence. When you think about it that way, you think, “Was any of this evidence actually solid, or was it just a whole bunch of mediocre?” If I’m presented with a whole bunch of mediocre evidence, say it’s all 2%, I will jump a little bit higher than 2 because there sure was a lot of it, but you’re not getting a lot higher than 2% because it was all garbage. It’s just, you had a lot of it: “I’ll give you a 50% bump and move you to 3%.” That’s an example of a continuum, which is what made me think of it.
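An aside for readers: to put rough numbers on the “arguments are not additive” point, here is a small sketch with invented likelihood ratios, my own illustration rather than anything from the episode, of why a pile of mediocre arguments barely moves a 90% prior while one genuinely strong argument does.

    # Hypothetical illustration: a gallop of weak arguments vs. one strong argument.
    # Likelihood ratios are invented for the example.

    def update(prior, likelihood_ratio):
        odds = prior / (1 - prior) * likelihood_ratio
        return odds / (1 + odds)

    belief = 0.90                      # "I'm 90% sure the Earth is not flat"
    for _ in range(10):                # ten mediocre counter-arguments,
        belief = update(belief, 0.95)  # each only slightly favoring the other side
    print(round(belief, 2))            # -> 0.84: nudged, nowhere near flipped

    belief = update(0.90, 0.05)        # one genuinely strong counter-argument
    print(round(belief, 2))            # -> 0.31: this is what actually moves you

Ten weak arguments nudge the belief a little; a single solid one moves it a lot, which is why it pays to ask whether any of the evidence in the gallop was actually strong rather than counting how much of it there was.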

Ari: Are you trying to say that you’re not going to join my flat Earth club?

Dr. Sharpe: I don’t think I am. I think that’s really useful in that– I’m just going to riff on a few of these and you stop me when you need to.

Ari: Yes, please.

Dr. Sharpe: There’s another one that I like to think about and some of these seem obvious, but as a mental habit, I think a lot of times people don’t formally do this and it’s different if you formally do it than if it’s just something that sounds like that makes sense to you. Like meditation, that makes sense to me. It’s different than meditating. There’s this concept of can this be true and must this be true? For can this be true, is it possible? One of the things that I like to do for the plausibility is I’ll apply Occam’s razor.

Occam’s razor, there’s a technical definition for it and there’s the one that we all use. Ultimately, when you’re presented with different ways that things could be, so competing hypotheses or competing arguments, the one that has the fewest assumptions is the one that is more likely to be true. If you need to believe five things in order to believe that this thing is true, and someone else has an argument where you only need to believe one, and if you believe that one then this thing is true, odds are that simple one is probably the one that’s true.

I can’t say certainly, but we’re all doing these constant checks: is this likely true? One of the ways that I’ll look to see is, how complicated was that? With that Gish gallop, it’s super complicated, because you don’t get straight to “this is the flat Earth.” You get, “Well, they’ve observed this,” which is like this peripheral thing, and people tend to have a ton of peripheral things and they think that makes it strong. That’s the first thing, for can this be true?

Then the next thing that I like to think about is, when someone presents something, or if I’m presenting something, I ask myself, must this be true? Must this be true is: with everything that’s been said, can I say based on all of that evidence that this thing must be true? If it’s not, then what are the scenarios in which that thing might not be true? That helps you start breaking down, “Well, this is where that hole lies.” The real advantage here– I do some experimental research as well, and experimentally this is helpful because it helps you shape your hypothesis.

If you have a hypothesis that is not likely to bear fruit, then you just wasted a lot of time, six months, however much money was invested in the investigation. You need to think about all the things that could go wrong, and the way you do that is you hypothesis-test, like mentally hypothesis-test, before you even get into doing your experiments. You try to break whatever concept you have: you take the idea and you say, “Must this be true?” If it’s not, why not? Then you parse out the why-nots. Why not A, and can I account for that? If you can account for it, then you shuffle it off and look at the next why-not, and you try to eliminate all those why-nots.

Ari: This is super important. I’m glad you brought this subject up. I had made a mental note of it that I wanted to bring this up prior to the podcast, and I forgot about it. Thank you for the reminder. I think what you’re talking about is especially important because most people do the opposite. Most people have confirmation bias. Most people are looking only for evidence that fits with their priors, their previous assumptions and beliefs and knowledge about a particular topic. They filter out everything that doesn’t fit, which is really easy to do because we can form our own echo chambers in our social media channels and our newsfeeds and all that. Then you only select out the pieces of information that fit with your narrative.

I want to tell you a little, maybe something personal about me. I’m curious to get your thoughts on it. It might also apply to you, or I’m curious just what your perceptions are, but I think that I’m contrarian by nature. It is very much my nature psychologically to want to poke holes and find the flaws in any information that I encounter.

One of the things that was really useful for me in my own intellectual growth, in my own ability to see things more clearly was when I started to turn that on myself. When I started to turn my natural tendency to poke holes onto my own ideas, my own theories, and my own beliefs. I started intentionally doing the opposite of confirmation bias. I started intentionally trying to prove myself wrong.

Every time I was going to put out an idea or write a book or make a video on something, it was like, “Let me bulletproof this, let me find all of the best pieces of arguments and best pieces of knowledge that could poke holes in my idea and see if I’m wrong.” Sometimes I am wrong and then I don’t ever put a video on that subject. I spend a lot of time really questioning myself and trying to argue with myself and find the flaws before I ever put any information out. I’m just curious, what are your thoughts on that landscape?

Dr. Sharpe: I think critical thinking is important for everybody. I think some people are drawn to it like a moth to a flame and some people have to work a little bit harder to be convinced that it makes sense for them to devote time to it. One of the tools that I use in this exact context is the concept of the steel man, which I assume you probably know. Almost everybody seems to know the straw man, which is when you’re having a discussion with somebody, you might put forth an argument that is a weak version of that other person’s argument. Then you very easily swat it down and you’re striking down a straw man.

The steel man is sort of the polar opposite of that, where you’re trying to erect the strongest version of the other person’s point, or the strongest version of the point that maybe you disagree with; you can even do this internally. Then you look to see, can I strike down this steel man as easily as I struck down that straw man? You look to see the places where you’re unable to break down that steel man; that means those are legitimate arguments.

Those are valid and those need to be the things that on your continuum, again, that move you a little bit away from where you were before and more towards whatever that thing was that you thought maybe wasn’t right but now you’re like, “Since I can’t falsify this thing, I can’t feel that it’s as false as I felt like before I tried to falsify it.” I think that’s an important tool. Then just the concept of being charitable to the person that you’re arguing with or that you’re having a discussion with.

If you’re not charitable, then you end up possibly thinking that they hold beliefs about what it is that they’re thinking that they don’t actually hold. If somebody says something and it’s not 100% accurate, it could be they just misspoke. It doesn’t mean that they’re necessarily wrong. It doesn’t necessarily mean that they hold a really firm belief in that area. Just having that concept of being charitable [crosstalk].

Ari: I feel like both of these concepts tie into the dialectic as well, because, well, the second one you said, as far as being charitable, that goes along with like, “Hey, I don’t want to misrepresent you. I want to make sure I’m accurately representing you.” As a side note, I literally had this discussion with someone in the last few days who was misrepresenting me and trying to argue against this misrepresented position.

I was literally telling her like, “I don’t feel the need to defend this position because I don’t hold the position you’re trying to attack.” I’m pointing out the exact quote. I’m like, “I do not think this.” I’m literally telling her, “I do not think this. You’re trying to attack this position. I don’t hold that.” She’s like, “No, but you do. You do.” I’m like, “No, I literally don’t. I’m telling you, I’ve never said that. I’ve never argued that and I’m telling you now, I do not hold that position.”

The idea of a steel man, it’s like, if you’re trying to get one up on the other person and win against them and humiliate them and show why they’re wrong and you’re right, then you want to straw man them. You want to misrepresent, create the weakest version of their arguments to attack that and show why that’s wrong.

If you’re in dialectic and you are actually engaging in good faith, trying to arrive at some shared truth and mutual understanding, and trying to learn from one another, then it’s like, “Hey, I want to make sure that I’m accurately representing you and forming the strongest version of your argument before I try to poke any holes in it.”

Dr. Sharpe: You know what, I think that’s a great point. You want to have a discussion with the best version of the argument of the other side. Otherwise, you’re both wasting your time. There are times when you might be having a discussion with someone, and you help them develop the best version of their argument, right? Whether it ends up being the ultimate true or correct side or not, that’s a whole separate thing. When I think through all the things that I really wanted to talk about today, one of the final ones that I can think of, if I can bounce around a little bit, is this: I find it’s really important to ask yourself, what is the outcome that I want? Whatever it is.

If I’m having a discussion about a thing, what outcome do I want to see from that? In this regard, I have a real consequentialist point of view. In consequentialism, the idea is that the best approach is the thing that ends up with the best result; the correct approach is the one that produces the best end result. It seems like when someone hears that, they think, “Oh yes, well, obviously everybody is going to take that approach.” That’s not actually the case. If we scroll back to some of the more charged topics, a lot of times these concepts of justice or rightness or correcting a wrong are treated as goods of independent value.

Having those discussions doesn’t always get you to the place that you really want to go. That’s where you can ask yourself, “What do I want to come from this?” If what you want from this is, “I want the person or organization to understand that the thing that happened was wrong,” then that’s a valid approach, that’s a valid way to go about having the discussion in the way that you’re having it. If the answer to “what is it that I want to happen” is, “I want a result that tangibly improves my life or intangibly improves the lives of others.”

Well, then “this feels like the thing we should talk about” isn’t necessarily the argument that you need to have; you need to have the argument that gets you to the destination that you want to go to. If you can’t say to yourself, “This thing that I’m talking about gets me there,” then you’re not going to get there. I don’t know if that was too abstract to– I’m trying to avoid– Some of the concepts are a little more politically charged than I [laughs] want to get into.

Ari: We’ve already covered masks. Well, partially covered masks. Now, let’s broach the topic of religion versus atheism, belief in God and vaccines, and what’s the best diet. Go. [laughter] Carnivorism or veganism, which one is it?

Masks: are they good or bad for you?

Dr. Sharpe: I’ll let you bring up the controversial topics. I’ll jump in when you hit on– when I’m ready to. It’s really good to have difficult conversations. I think that when you have difficult conversations, it’s really good for everyone upfront to understand what conversation you’re having, I maybe talked about this a little bit at the beginning, because it’s really easy for two people to engage in a conversation and, because the topic is along a general line like– I’m trying to think of one of the controversial topics that you brought up.

Ari: Vaccines. We’re talking about vaccines, masks, lockdowns. Those are all very controversial topics right now and highly politicized, where a lot of people are forming their opinions purely based on the media narratives of their political alliances rather than an analysis of the evidence.

Dr. Sharpe: Let’s talk about masks. I think that’s pretty easy to pull into this. [clears throat] When we think about wearing masks and not wearing masks, and let’s not get into– I know you’ve read a lot of studies, and you’ve talked to a lot of experts and you have different opinions about the efficacy. While that’s important for the concept of how we’re approaching this, being knowledgeable isn’t where we’re going here. It’s how do I think about this information? Let’s put the knowledge bit aside. In terms of how we can look at this–

Ari: Is this conversation going to end with you forcing me to wear a mask during this interview?

Dr. Sharpe: During this interview, yes, we’re going to have to go back and re-record or you can put on one of those Zoom filters where it’s like there’s ocean waves in the background and then–

Ari: [unintelligible 00:50:00] a little bit of the filters that you could just inject them.

Dr. Sharpe: I would not be surprised if they have that. The types of things that one would need to ask are: is it possible that a mask would be helpful? Is it possible that a mask could be harmful? Now, you might have data on either one of those things, but you have to ask those questions. Then you have to start asking yourself things like, what are the consequences of me being wrong in one of the situations versus the other situation, and then you do a cost-benefit analysis there. I fully believe that there are some people for whom, in most situations, it’s possible that wearing a mask could be problematic.

There could be someone that has specific pathology where that’s very difficult for them. Most of us are not in that situation. Then you look at the evidence, I know that you have, and then you start filling in some of those blanks, recognizing that they’re all on the continuum. We don’t have conclusive answers to, “Yes-Mask, no-Mask.” Regardless of what camp you’re in, that’s clear.

Ari: You are saying that, I just want to point out for people listening, Tim is not an anti-masker.

Dr. Sharpe: No, I’m a pro-masker.

Ari: Pro-masker, right. You are acknowledging– I’m trying to point this out because there are a lot of people who are pro-maskers like you are who actually think that there is an abundance of really strong, amazing evidence, and that the evidence is conclusive that masks have amazing benefits and save lots of lives. What you’re saying is, “I know the evidence is not there at that level of certainty yet, but based on the precautionary principle and the consequences of erring on the wrong side of this decision, I think it’s less consequential if I’m wrong but I’m wearing a mask.” Is that accurate?

Dr. Sharpe: I think that’s pretty accurate. My approach is, without getting into making this a conversation specifically about masks, I look, based on my understanding, at where it is likely that masks make the biggest difference. I make sure that I wear them in those situations. I am careful that I do not fall into the situation where I’m not wearing a mask in a place where either A, I think that it probably does help, or B, I know that I have a knowledge vacuum, where I can’t say with confidence that masks would not be helpful here.

I think that second one is really important because we’re not all dealing with the same knowledge inputs, not all of us have put in the same due diligence, not all of us have medical training to understand how to read research. I certainly don’t put myself sky-high. In the past month, you’ve probably interviewed three people that know a thousand times more about this than I do. We’re all at some point on the continuum, and all that does is adjust me downward in terms of my confidence, which makes me adjust upwards in terms of how likely it is that I should wear a mask, because I don’t know.

Someone who knows a hundred percent, someone who has perfect information can do literally exactly only what is necessary. I’m not that. For me, from a precautionary principle, like what you said, I do wear masks. I’m careful about mixing in with other people, depending on how much engagement they have with the public. I know you’ve thought a lot about this. In terms of my view on how I’ve communicated masks, first of all, I’m curious what you think, but two, I’m curious what hole you can poke or what thought you might leave me that might move me in a direction different than where I am right now. That’s important.

Ari: I totally agree with your framing of it. I think this is a good example because I think we’ve arrived at somewhat differing perspectives on it, but with similar mental models for how to operate in this landscape. I think a context for understanding the potential consequences is interesting because I think there’s some bad framing of this that’s taking place.

One example that I would consider bad framing: I saw one of my friends on Facebook who’s strongly pro-mask post, probably earlier today or yesterday, a picture of him wearing a mask and saying, “I’m still alive. See, masks aren’t harmful. It didn’t kill me to spend three hours in a mask outdoors gardening and mowing my lawn.” A few examples: we know that smoking a pack of cigarettes a day doesn’t kill you after a day of smoking a pack of cigarettes. It doesn’t even kill you after a year. It doesn’t even kill you after 10 years of smoking a pack of cigarettes every day.

One might conclude based on that, smoking a pack of cigarettes a day is perfectly harmless because I know people that have done it for 10 years and I can’t see any harm, they’re still alive. They seem to be doing just fine. There are people who don’t exercise for decades, still alive. Could we conclude being sedentary is harmless? No, we know exercise is profoundly beneficial. We know people who drink alcohol. Again, the same thing, you can do that for decades and still be alive. My point is, it is possible to do something for decades and for it to cause a subtle degree of harm that manifests in a very significant way, over a long period of time.

Another example of something that’s subtle is like lead in the plumbing in the pipes. We know that lead exposure is absolutely harmful and lowers children’s IQs and causes endocrinological damage and all kinds of health problems and yet if you are a person living in a place with lead plumbing, you will never actually observe the effects of lead plumbing. You’ll never see with your eyes and experience the harms of that subtle chronic lead poisoning.

How does this connect with masks? Well, one, we need to evaluate what the strength of the evidence in support of mask-wearing is. There are some studies that have shown a benefit in terms of decreasing risk of infection. There’s also a whole bunch of studies, including meta-analyses and systematic reviews of randomized controlled trials showing no benefit or a very small benefit. On cloth masks, there’s only like one or two studies, and one of them shows an increased risk of infection compared to no-masks when using a cloth mask, which is what most people are using.

Then you have this other layer, going back to what I was just saying, as far as what are the potential harms? Two things I’ll point to on that subject. One is, there are studies looking at healthcare professionals like nurses, for example, that wear the masks for many hours a day, and the incidence of headaches. Many nurses, I think 20% to 25%, have significant amounts of headaches as a result of wearing a mask for many hours each day.

Then we should ask what’s causing that. Probably what’s causing it is this: our body regulates oxygen and CO2 levels in our bloodstream. What a mask does is cause much higher inhalation of carbon dioxide, and it potentially alters the levels of carbon dioxide in the blood, or at least stresses the body’s buffering systems to maintain the appropriate level of carbon dioxide. That can manifest significantly enough that in the span of a single day of several hours wearing a mask, you can experience a headache. I don’t know the exact physiological mechanisms of what causes the pain, but something significant enough is going on in your body that you feel pain.

I’ll also mention one other layer here, which is that there are guidelines from OSHA, which is an organization that establishes what safe levels of carbon dioxide are to breathe in indoor environments, regulating indoor office environments, schools for kids, what are safe levels of carbon dioxide. I may get the numbers a bit wrong, but it’s something like below 500 or probably 1,000 parts per million of carbon dioxide in the air is what’s considered safe. Between 1,000 to 2,000 is considered potentially hazardous. Then above 2,000 is considered hazardous.

If you have kids sitting in a school, sitting at their desks, just breathing in the ambient air of that room and it’s above 2,000 parts per million carbon dioxide, that’s considered hazardous to their health based on the established evidence. Now, as soon as you put on a mask, you can put in a CO2 meter that measures the levels of the air that you’re breathing when you’re wearing a mask, and it’s way above 2,000. Depending on the type of mask, it can be anywhere from 3,000 to upwards of 6,000 or 7,000. What you’re doing there, in my mind– Especially if you get on a bus for half an hour and you wear a mask, no big deal, do it. The precautionary principle weighs heavily in your favor there.

If you start telling kids we’re going to open up schools, but we need all kids to wear a mask for six or eight hours a day while they’re in the school every day, month after month, now in my mind, this does have the potential for a significant amount of harm. This is an experiment that we don’t know the long term harms of, and so where does the precautionary principle lie there? Do we favor trying to avoid the virus, or do we worry about the harms of breathing so much CO2 that we know is rated as hazardous on a daily basis, where do you land?

I'm not saying I know the answer, but what I am saying is there's enough evidence to warrant at least some concern about the potential consequences of that and to not be so myopic in our thinking that we only consider, hey, a piece of cloth blocks this many viral particles, therefore the cloth is good. We have to consider some other layers in that.

Dr. Sharpe: I think there's value, when I hear you make that argument and bring up those points, in breaking down where my mind goes. That's much more important than any counter-evidence that I could bring up, because that's not what this podcast is about. It's how do we think about these things? The types of things that I think of, and these could all be answers that you already have, and I'm happy if you do, but those are the directions I would go: looking for studies that talk about harms brought on by masks.

When I look at what those data are: where did that break down? Where did they hypothesize that these harms arose? You talked about one being carbon dioxide, and that's legitimate. If your data are accurate, then I think that's a valid point to look at. Another place where there could be a problem is a certain level of viral load being on the mask itself. Then you touch the mask, you touch your face, you touch your eyes. At that point, what I would do is engage with it. Okay.

We talked about this earlier in the podcast, where you try to break things down, then you look to see what their component parts are, and how do I deal with those component parts? You can say to yourself, "All right, if the problem is that the mask is dirty and I touched my face, then what if I wear a mask and don't touch my face?" That's one thing. Then you would say, "Okay, can I actually not touch my face?" That's another thing, which is harder to do than you can imagine.

We've all seen that video of the person talking about the importance of masks and not touching your face. She lifted her hand to turn the page of the thing she was reading from, right as she was saying not to do the thing that she just literally did on camera. That was funny. Then, in terms of viral load, say 20 minutes of mask-wearing, and I'm just throwing out numbers here, 20 minutes of mask-wearing might build up a certain load of virus that we would need to be worried about. Again, a made-up number, just for an example.

If that's the case, then if you're someone like me who doesn't go out a lot, I don't spend a lot of time out in my mask. If my habit is to take the mask off, put it in the wash immediately, and not reuse that mask right away, then that wearing of the mask counters that specific issue, at least partially. I would look for all of those points of intersection and see how many of them I can address.

The more of them I can't address, the more that leads me toward your position of, "I'm not sure that I want to wear masks in as many situations as maybe some other people are wearing them," rather than just leaving it loosey-goosey. I think that's a reasonable approach, and I think that's how all of us should be gauging risk. Whatever the risk is, the risk is there whether we believe in it or not. We're all served by not fooling ourselves into having an ideology about the risk and believing that that ideology is going to manifest in the attenuation of that risk, because that fundamentally will not happen.

Having these conversations, I think, is useful. One of the really challenging things is that everybody can't read all of this information. We're left with people like you, people like some of the people you've interviewed, Dr. Katz, and a lot of epidemiologists and professionals out there; it's up to them to do this. Something that I think you and I both wanted to talk about, and I don't think we really got into it, we got into it a little bit in the dialectic, is good faith.

It's really important that the people we're paying attention to are acting in good faith. It's important that they have good information themselves, but it's equally important that they're acting in good faith, and if they're not, or if we feel like they might not be, there's not a lot we can do with their information. If they come out and say, "Masks don't help, don't wear masks," and then come back later and say, "You know what? Masks are important," regardless of which of those is true, you're not going to believe that person.

Ari: Well, I think we did have that exact issue.

Dr. Sharpe: That is precisely it. That's why I brought that up.

Ari: We have the World Health Organization, we had Anthony Fauci, both saying early on that essentially there's no evidence to support that masks are doing anything useful. Then there was a complete reversal of positions, and not only a reversal, but we went so far as states mandating the wearing of masks and businesses mandating the wearing of masks in order to enter.

It's like, "Wow. There have been decades of research on the subject, and then in the span of two months, apparently so much research happened that it completely reversed everything that's known about the evidence around masks." And hey, maybe there was, maybe there is, some evidence that came out that's really compelling, but that just seems extraordinarily unlikely in my mind, knowing that science is a process that takes place over years or decades of solid, well-designed studies. It's not something where, hey, researchers do a study overnight, and that completely reverses the 300 other studies that have been done over the last 30 years.

Dr. Sharpe: I think a lot of people jump to a conclusion here, and there's actually a pretty good reason why they would, because Anthony Fauci came out, and correct me if you heard differently, but my understanding is he came out and said that the reason he personally, and the CDC more generally, said that masks would not help was the shortage concern.

They didn't want to have a run on masks. But the policy about not wearing masks in this type of situation predates this pandemic. If you look at CDC documentation and best practices, it was already there. If they were already saying not to wear masks when there was no pandemic, and therefore no shortage, then you have to ask yourself how the shortage explanation holds up. That thought train doesn't reach the station.

Ari: The thing that you're describing, where he came out later, after earlier interviews where he was saying there's no reason for people to go around wearing a mask, and then said, "Hey, I was just saying that so that people didn't go buy masks, so that we would have more for medical professionals." That, to me, seems disingenuous. It's like, "Hey, I lied, but I did it for a good reason." How about just don't lie? If you truly believe that, then don't lie. Taking him at his word about what he's saying, and to be frank, I don't; I think it was disingenuous.

I think it was an after-the-fact rationalization for a flip-flop that's not warranted by the evidence. But even if you were to believe him, it's like, "Well, why did you need to lie and infantilize the whole population?" Why not just say, "Look, folks, masks are great. They help stop the spread of things, and yet it's most important for our medical professionals on the front lines who are taking care of sick people. We need to have adequate masks for them. Please don't go out hoarding masks for yourself, and please make sure that we have enough for the medical professionals." Don't lie and then try to say, "Hey, here's why I had to lie," after the fact. That just seems like not a good way of doing things.

Dr. Sharpe: I have a strong intuition in that same direction, but in the interest of some of the things we've been talking about, to be charitable to his position: imagine someone in a movie theater crying out, "Fire, walk orderly to the exits." As soon as you say "fire," people are going to go nuts, and here we're trying to communicate a nuanced approach. Some people would have listened, but there would have been lines around the block at any place that sells masks.

That would have happened. I don't want to say that I support the approach that they took, but I think it was an approach that has to be on the table. You have to consider that approach, whether you choose it or not. I don't have all of the possible approaches in front of me to be able to make that decision. I'm definitely uncomfortable with the way that it went down.

Ari: I see the logic in what you're saying. The issue is, doing it this way also creates consequences, because now you have a whole segment of the population saying, "Hey, you lied. You didn't tell us the truth. If you're a liar, why should we believe what you're saying now? How do we know that what you're saying now is the truth, since you already have an established record of lying to us?" That's why I think not lying is just a better policy, if you extend things out into the future consequences.

Dr. Sharpe: Well, I'm actually really glad you said that, because what I was going to say is, when I brought up consequentialism earlier, this is advanced consequentialism. The consequences aren't just what is immediately in front of us; the consequences are the butterfly effect, what ripples all the way down. I know this is something that will resonate with you, the concept of: whatever policies we enact, what impact does that have on the economy? What parallel impacts does that have? There's this whole row of dominoes, and you have to think very critically about all of these things to try to look and see, "Okay, if I believe this to be true, what must also be true? What might also be true?"

You line up all of those dominoes that you can think of. You don't just line up the ones that you know to be true. Like I said, you line up the ones that even possibly could be true. There are people who do this for a living. I'm not one of them. I'm not an epidemiologist, so I'm not going to pretend to know the answer to these things. I'm not even going to pretend to know the right question to ask. There are right questions and wrong questions, and you have to ask not only questions, but the right questions, and you need to stop asking the wrong questions.

I think what we've seen is a capacity for wrongheaded thinking from people in charge on both sides of the aisle. I know you're fairly apolitical. I'm pretty apolitical also. I obviously have my tendencies and my biases, but they're not tied to a party, which I believe is probably similar for you. Both sides, both parties, everybody has had some really wrongheaded thinking, and they have not thought through all of the dominoes and what happens when they start falling. That's really been a lack of leadership.

Again, not specifically political, the lack of leadership has been all across the board. It certainly has been at the top of our own country, but it’s been in places where it’s not elected officials, it’s been in a lot of different spots. I’ve heard people like Eric Weinstein talk about the idea of, as soon as this hit, what we should have done is take all of the brightest minds from the universities that got it right in the beginning that said, “Hey, this is a problem,” and that have expertise in this. You get them all in a room somewhere. You isolate them, you test them, you make sure they’re fine and you tell them, “We’ll see you in two months.” They work out whatever needs to be worked out.

Ari: I just want to add one layer to that, which is, they need to have different disciplines and areas of [expertise], because what I saw happening with this, my biggest pet peeve about the way things went down with this COVID thing and our response to it, is that it was myopic. As you were describing with second and third-order consequences, these other dominoes that get set into effect when the first one or two fall, we only had people whose entire area of expertise was, like, the first two dominoes.

What there was a total lack of is discussion between people with that kind of expertise, the virologists, the infectious disease epidemiologists, and economists and public health experts who understand the interface of how, for example, shutting down whole economies and taking kids out of school manifests as deaths of despair, what that does to depression and suicide and heart attacks, what it does to child abuse and spousal abuse in homes, what it does to people who can no longer feed their families because they don't have a job anymore, and what the long-term effects on the economy of printing trillions of dollars are.

All of these other effects affect our lives in such massive ways. To give one quick example of something really practical that I think hits home: if you look at the continent of Africa, over the course of the next three years, let's say they have 5 million deaths from COVID, the virus, which is probably 20 times more than they're actually going to have. Let's just say that they have 5 million, this huge number.

Dr. Sharpe: You’re saying direct deaths, contracted-COVID die?

Ari: Correct.

Dr. Sharpe: Okay.

Ari: Now there's research showing that, based on disruptions of the food supply, largely as a result of our response to COVID, there are reports saying over 130 million people, kids and families, are at risk of starvation as a result of not having adequate food. I'm not saying, hey, we did things wrong and this is the worst possible approach, and there are other counter data and counterexamples to engage with here. What I am saying is, it seems like you need to consider those kinds of second and third-order consequences when you're enacting a policy to combat something, and it can't just be, how can we minimize deaths from the virus?

Because if you enact the best policies in the world to optimize for minimizing deaths from the virus, it is completely possible, and I would even argue probable in many cases, that you will end up causing more deaths from your responses than from the virus itself. You can be successful in minimizing deaths from the virus while actually causing more total harm. That, at least conceptually, should scare people and make us want to consider those kinds of second and third-order consequences.

Dr. Sharpe: This is all really interesting to talk about. One of the ideas that is really important to hold in the back of our minds when we talk about this is the idea of hindsight bias. You and I are both going to be a lot more willing to entertain some of these concepts, like that we locked down too hard. The argument could even be whether we should have locked down at all. Regardless, we could say we locked down too hard or we came out swinging too hard. We have to look at the timeline and think, "All right, where were we then?" This matters in different ways. It matters in a different way now than it mattered then.

Some of it is how critical we are of what we did then. Some of it is what we need to do now, knowing what we know. We definitely need to take all of our hindsight and use it for what we do now. What we have to be careful not to do is beat ourselves up about the actions that we took before through the filter of that hindsight. I think this is one of the areas where politics specifically has a lot of trouble: as a politician, you take a stance, and once you plant that flag, you've planted the flag. It's really hard to uproot that flag and move it. We opened this talk by talking about how important it is to work out where you fall on the continuum and move every time you get new evidence.

Part of what needs to happen is that the public needs to be educated on being wrong. There's not a problem with being wrong. That's not the problem. The problem is not updating when you're wrong. You could have been wrong in good faith, made a mistake, or not even made a mistake; you made the best possible decision with the existing data, which, even though it turned out wrong, was the right choice. You see that in football: you go for the extra point or the two-point conversion based on what the data tell you to do. Whether you score or not doesn't make the decision wrong; it just determines how it came out. The percentages work out that way.

Ari: From the fan’s perspective, they’re like “Oh, man, you’re so stupid for going for the two-point conversion.”

Dr. Sharpe: Exactly.

Ari: [unintelligible 01:18:03] just gone for the extra point.

Dr. Sharpe: The public needs to learn not to be really hard on people who make decisions that turn out not to be the best decisions. What they need to do is expect of them that they're going to make better decisions with the new data that they have, and make their decisions irrespective of what whoever they feel is their opponent believes.

Ari: Wouldn’t that be a nice world.

Dr. Sharpe: It fundamentally doesn't matter. That's one of the reasons why. I get that we live in a country that has politics, like every other country, so you're not going to make politics not exist. What you try to do is safeguard your processes from that. We have a First Amendment for a reason, and that safeguards us against a lot of things, a lot of bad actors that might come in. This isn't actually going to be a First Amendment talk. What I'm getting at here is what you can do if you know that you have a situation that can be problematic.

You have side A and side B, and they're going to clash, and you have something you have to get right. You know that if side A takes one position, side B is going to take the other side; that's just how it works. You remove the issue from that situation by doing what we talked about earlier, what Eric Weinstein talked about, where you get those people together and you just let them decide what they're going to decide. They are not political actors, they're not puppets. They are experts, and you let the experts be experts, and when they come back and tell you the thing, it doesn't matter how you feel about the thing.

This is not about feelings. This is about: there is a pandemic, there are consequences to actions, and there are consequences to inactions. You choose a course and you plot it based on the best minds available to figure that out. When they're right, you do more of that right thing, and when they're wrong, you do less of that wrong thing. That's really what all of this is about: doing more of the right stuff, less of the wrong stuff, not beating yourself or others up about it, and then wash, rinse, repeat.

Ari: Yes, absolutely. Tim, well, I feel I could talk to you all day about this stuff. I've really enjoyed this conversation. I think this is a lot for people to take in and to know what to do with on a practical level in their day-to-day life. Could you give people two or three big takeaways, strategies or mental models to apply in their life on a daily basis, to better deal with the wars of information that are taking place right now, the wars for their mind, these mind viruses?

Also, how can they have more productive conversations with other people, especially people across the divide, across the chasm of politics, or who are on the opposite side of an issue from them? How can more people have more productive conversations instead of just furthering the divide and the chasm and calling each other idiots and so on?

Dr. Sharpe: It's a good question. In my head before this, it was all a lot more practical. These concepts, the more you explain them, the more complicated they become. I think the simplest approach would be that concept of: what do I think I know, and why do I think I know it? And then applying that to the people you're interacting with: what do I think they know, and why do I think they know it?

In as much as you can do so without creating conflict, ask them. There are ways to ask somebody, "Oh, that's interesting. How is it that you came to believe that?" Not telling them that they're wrong or that you don't share their opinion, just, "That's interesting. How is it that you came to believe that?" Then there's a really cool tool that they use in improvisational comedy called "yes, and," and I think it's a really good tool to use in conjunction with the "how'd you come to believe that," because whatever someone says usually wasn't their only reason; they have other reasons.

If you say, "Oh, yes, and what other reasons led you to believe that?" you just try to figure out what brought them to the totality of their position. What you'll find is, at some point, they're out of "yes, ands." Then you can look and see: they've just told me how they came to believe that. Does that change my own model of what I believe about the thing that they believe?

It's not necessary for you to engage them on that. You can, but that's not the necessary part; the necessary part is for us to do our own updating. Then you can freshly engage with them based on understanding where their perspective lies, what their priors are, and you already know your own priors, and then you make that decision: is this engagement worth having?

Ari: 100%, yes. Any other final thoughts that you want to leave people with? Maybe one more big takeaway?

Dr. Sharpe: Because things can be so contentious, I would say the other takeaway is to be charitable to the views that other people present, and when you see something that doesn't match up with your worldview, try to think about why, or how, someone might have come to believe that. It may not lead you to believe the same thing that they believe, but you might see that that person's experience led them to feel that way, whether or not they're correct about the conclusion that they drew.

It may be perfectly reasonable that they're drawing that conclusion based on the experiences that they've had. I mean, we all rehash our life's experiences over and over in our conversations and our relationships, and we can't separate ourselves from that. When you're charitable to someone else's point of view, it allows you to be less judgmental about where they're coming from and what they're saying.

Again, when you understand their experience, you can use that as one more piece of knowledge about whether or not that's something you need to engage in. If it's someone whose position comes from a very personal experience, then maybe engagement through rationality or proof isn't the conversation you need to have, because that's not what led that person there. If that person didn't get to their destination by that route, then that route is not going to be the method that takes them to a different place. You're not going to rationalize someone out of an emotionally held position.

Ari: I feel compelled to add one layer to that, one potential caveat, and I'm going to use your words, "yes, and," because I agree with everything you said. I do think there is one potential hazard in that, which is that you can form echo chambers. You can start to make those kinds of judgments about "these are not the kinds of people I want to engage with" purely based on, hey, I'm a liberal and they're a conservative, they're a Trump supporter, so I'm not even going to engage, this is not worth my time, or I'm pro-mask and they're anti-mask, therefore they're selfish idiots and they can't possibly have anything to say.

Part of this comes with being charitable, but I think that, for me, the other element here that's important, in order not to form echo chambers and to actually bridge some gaps so that people on both sides can learn and have more productive conversations, is to actually seek out conversations with people who hold different views than you. Seek out people who represent the most intelligent and sophisticated version of the argument against your position, but who are also good-faith actors, people who are polite and respectful and kind and want to engage in a dialectic. Granted, there may not be a whole lot of people like that, but I wish there were more.

Try to find those kinds of people in your life and intentionally seek out engagement with them for the purpose of growth. Seek out someone who can help sharpen your ideas so that you can evolve to a more sophisticated, knowledgeable, and intelligent view on that subject. That's just one piece I wanted to add to what you said.

Dr. Sharpe: I agree with what you said. I think it's actually compatible with what I said, because ultimately, when people are on different sides of a concept, what brought them to those sides is going to be different; we're going to have different inputs to why we believe what we believe. The point I was making before is that the nature of the inputs that brought someone to their conclusion is one of the pieces of information you'll use to decide whether or not it is in your best interest to spend time trying to engage with that, because you can't out-logic someone who has a firmly held belief.

I can talk to my mom who is a devout Catholic. I couldn’t come in and present her with 30 facts about her idea of an historical Jesus, that were profoundly demonstrably not true, and have her shift literally an inch on that spectrum. It’s just not going to happen because that’s not how she got there and that’s not what’s important to her core belief. It wouldn’t make sense for me to have that conversation with her because literally it’s going to go nowhere.

Understanding where people come from, and going back to the priors, understanding how someone got to where they are, gives you more input on where to go from here. I agree with what you said, and I think it's very important to have that conversation. I brought up Eric Weinstein. I just saw he did a podcast with Ted Cruz, who is ideologically, if not polar opposite, probably pretty close to polar opposite. It was fantastic. I don't know that either one of them moved a lot, but what they did was fill out the ignorance. Any points of ignorance that one of them had about the other one's beliefs are now filled in.

Ari: The followers of each of those people too.

Dr. Sharpe: Exactly. Even if they didn't move an inch, with good faith dialogue, the people who were unsure were able to move with that information, whether it was left or right, or up or down. Whatever direction they moved, they were able to move that direction because of that good faith conversation, because they were having the conversations that needed to be had and truly focusing on that. Yes, I agree with you, seeking out tough conversations with people who hold views different from yours can be really important, especially if your goal isn't to be right. That's just a bonus when it happens. [laughs]

Ari: Absolutely. Well said, my friend. Well, this has been an absolute pleasure, as it always is chatting with you. Thank you for being a good friend to me on a personal note, for helping me do all of the things that we've talked about on this podcast, and for being one of those people in my life I can engage with even when we don't agree. I think we agree on probably most things, or a lot of things, but there are times when we don't, and we're always able to have exactly these kinds of discussions, which have certainly helped me evolve in my thinking. I'm grateful to you for that.

I look forward to having you on again. I know we're probably going to talk about gut health soon, which is a topic of your expertise, among your many areas of expertise. I look forward to that, and I want to mention again to people listening, Tim is starting a new blog and a podcast which is going to be excellent. It's going to be one of the podcasts that I'm regularly tuning into, and it's called Reason Chasing. Check it out, and Tim, thank you so much for coming on the show.

Dr. Sharpe: Great. Thanks, Ari. It was wonderful.

Show Notes

The concept “viruses of the mind” explained (07:30)
The mind is like a computer (12:30)
How culture may shape your way of thinking (15:11)
What do you think you know, and why do you think you know it?  (23:46)
The danger of “black or white” thinking (31:24)
Masks – are they good or bad for you? (51:21)
