I think this presents a false dichotomy between evaluating every claim from the ground up and not using your own thinking about the issue to evaluate the alternatives at all.
I don't think anyone is suggesting that someone who wants to make a decision about a medical diagnosis should start from scratch and evaluate every argument afresh. However, the usual case is that there are multiple positions, often each defended by experts, and one can usually find presentations of the disagreement between them where you can at least compare the arguments they make to some degree.
I'd argue that one should think of it like probabilistic proof verification. You can't hope to evaluate the complete arguments presented by each side, but you can look at samples of the arguments they've selected as their most convincing and check how strongly each step supports the next intermediate conclusion. If one side's arguments seem to have weaker links when sampled at random (correcting for the length of the arguments), that's good statistical evidence that it has the weaker overall case.
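To make the spot-checking idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the step scores are just numbers a reader might assign after checking individual links, and the function is an illustration of the sampling idea, not anyone's published method.

```python
import random

def estimate_weakness(step_scores, n_samples=20, rng=random):
    """Spot-check a random sample of inference steps.

    step_scores: one (subjective) score in [0, 1] per step, rating how
    strongly that step supports its intermediate conclusion.
    Returns the average weakness (1 - score) over the sampled steps,
    an estimate of the per-step weakness of the whole chain.
    """
    sample = [rng.choice(step_scores) for _ in range(n_samples)]
    return sum(1 - s for s in sample) / n_samples

# Hypothetical scores after checking each side's "best" argument chain.
side_a = [0.90, 0.85, 0.95, 0.80, 0.90, 0.88]  # mostly tight links
side_b = [0.90, 0.50, 0.95, 0.40, 0.85, 0.60]  # a few weak links

print("Side A estimated per-step weakness:", estimate_weakness(side_a))
print("Side B estimated per-step weakness:", estimate_weakness(side_b))
# The side whose sampled links are systematically weaker is, statistically,
# more likely to have the weaker overall case (after correcting for length).
```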
Also, when that fails, I'd argue that comparing how the various camps have done on prior predictions counts as "thinking for yourself".
Besides, if you don't think for yourself, what other option do you have for deciding between competing expert views? Suppose you have to decide whether to get a surgery, some docs are big proponents and others detractors, and both sides are likely quite credible (none of the easy ways you can discredit global warming deniers applies). So, other than at least getting a sense of what the arguments are and why each side holds them, and then trying to analyze some of the links in their evidentiary chains, what else can you do? Flip a coin?
I think Peter Vickers' book Identifying Future-Proof Science (Oxford University Press, Oct 20, 2022) attempts to provide people with some direction on how to engage in what I take you to mean here, Peter, concerning probabilistic proof verification. I watched a recent discussion he had with a sociologist about how to make sense of, or 'think critically' about, competing claims by experts when one cannot access primary source materials (or experimental data). I've not read the book, but I'm hoping it might be a work I could recommend to the average person.
I think that many people default to the heuristic "Defer to the person with high status," even though that person may not be an expert. A famous athlete appearing in a commercial for a financial service, for example. "Think for yourself" may be an admonition against the heuristic of believing the (non-expert) high-status individual, such as your favorite politician.
Critical thinking and relying on the expertise of others are not mutually exclusive. In fact, the former, if done well, entails the latter.
If I judge someone to be an expert on a topic, then his statement on that topic is one piece of evidence which I should take into account. The larger the expertise differential between him and me, and the more trustworthy I think he is, the stronger the piece of evidence.
However, that should be weighed against the other evidence I have.
This post presents much more clearly something I've been thinking a lot about lately.
I've tried teaching precisely this point in my Critical Thinking classes, but it's a hard sell. I'll have us discuss at length the fact that, for any question where expertise is relevant, it's way better to trust an actual expert than your own uneducated sense of "what makes sense." I tell stories and read quotes from conspiracy theorists thinking they're smart and rational for insisting on "thinking for themselves," about, e.g., chemtrails, despite a notable lack of degrees in atmospheric science. The students nod and agree. But the next day, they'll just blithely go on repeating how important it is to think for yourself; it doesn't seem to get internalized most of the time.
I think our individualistic culture has just really drilled into people the value of thinking for themselves. We hold anyone who doesn't think for themselves in contempt.
And of course, on the other side, there are people who *do* say we should believe the experts, but by "experts" they really just mean "crazed and dishonest ideologues who happen to have gained a lot of social power just right now." So there's no winning really.
Thank you for posting on this fascinating topic. I think you've hit on a very critical matter here, the vital difference between individuals reasoning independently and people reasoning together.
Despite its obvious value in some situations, I am turning over two related arguments against making individualistic critical thinking broadly normative: (1) the way we define arguments focuses on content and ignores critical processes, and (2) the way we assess arguments ignores the properties of the agents in order to avoid the "ad hominem" fallacy, but we tend to take that too far in individualistic critical thinking. I propose that these issues arise in part because we are not just talking about being epistemically responsible, as if we have a great deal of control over what we believe, which I suggest is debatable. When we talk about "critical thinking" we are also unavoidably talking about how we reason together.
(1) Ironically, and inconsistently, the tradition of critical thinking focuses on getting claims correct independently of how they were arrived at. All the emphasis is on the structure of arguments and logic. Assessing the merits of claims based solely on their supporting logic works very well for simplistic school examples, but when evaluating situations we encounter in daily life we need to take the social context into account: the participants, the judges, the juries, the advocates, the audience. In order to determine whether an argument is sound or satisfying, those aspects cannot be ignored. They are an essential part of how we arrive at an understanding of the argument.
(2) Taking the properties of agents into account in sincere inquiry, without merely arguing "against the person" as a rhetorical strategy, is not a trivial matter, but I think it is an essential one.
I find some sensible support of these ideas in the past decade or so of virtue argumentation theory (e.g. representative articles by Andrew Aberdein).
The wordplay in "critical thinking" is interesting. It seems many critical thinking proponents incorrectly substitute the "criticism" definition of "critical" for the "important" definition, at least subconsciously.
Do you think critical thinking would be less staunchly defended if it had always been called "criticism thinking" or "Cartesian thinking" instead? This slight change would still correctly describe this type of thinking as based in questioning everything, but it would force the phrase to lose its subconscious association with importance.
The problem with your solution a is that figuring out which experts to trust requires the same sort of skills as the solution c that you reject. You can't do it by credentials.
I've been looking at climate issues for a long time and can offer you an example of an elementary textbook in its third edition (https://daviddfriedman.substack.com/p/a-climate-science-textbook) and an article published in _Nature_ with a long list of authors (https://daviddfriedman.substack.com/p/critique-of-comprehensive-evidence), both of which, I think I can convince you, show that their authors cannot be trusted: they know what conclusion they want and are willing to misrepresent the evidence to get it.
I expect almost all of the authors of the _Nature_ article have PhDs. They are also in an environment where it is in their professional interest to reach some conclusions and not others. You will note that the statistical critique of Michael Mann's hockey-stick work was produced not by climatologists but by statisticians working in an unrelated field.
The only way to do your a is to read enough of the literature and analyze enough of the arguments to figure out whom you can trust. Short of that, you fall back on your solution b, as most people should do most of the time.
Your approach might work for an issue that isn't affected by political or ideological partisanship, but most of the interesting issues are.
I had a recent post on essentially the same question, with a different conclusion:
https://daviddfriedman.substack.com/p/how-to-learn-what-is-true
"the experts typically have greater than average intelligence"
So do I.
I know that experts have a lot more domain knowledge than I do. I also suspect that many are smarter than me, if for no other reason than the average IQ test results for PhD holders.
This doesn't tell me anything about how well a given expert's values and incentives are aligned with my own. Perhaps they are lying to me for (what they feel to be) my own good, or maybe they are captured by an ideology which is hostile to me or my ideology, or possibly they are working within a perverse ecosystem where bad (for me) behavior is rewarded.
The way to avoid these issues is for experts to build institutional trust: transparency around values and incentive structures, consistency in messaging, acknowledgement of errors, plus plain old 'not being wrong', at least not too often. I think it's widely recognized that there is an ongoing crisis in this regard.
In the absence of institutional trust, a) is out, and b) is frequently unworkable (radical skepticism aside, you need to either get vaccinated or not), so c) is all you have left: you do your sophomoric critical thinking as best you can and muddle through.
“Another might be when experts have some systematic bias.”
The exception that swallows the rule.
A great expression, which I had not heard, and which is entirely applicable to this point. I don't assume that experts in chemistry or physics have some systematic bias (though they surely do, but I don't have enough depth in those fields to be aware of the nature of their biases). But in the social sciences, and in politicized STEM fields, your expression is apt.
I will take a different view here, and reject (a) as a rational choice, because: 1) a conviction based on trust in the opinions of others violates the principle of sufficient reason, and 2) it typically involves a category mistake: it conflates technical advice with moral advice and thus purports (falsely) to delegate moral authority for one’s choices to others.
1.
Every individual is at risk of violating the principle of sufficient reason, but by uncritically accepting the judgement of experts one would be violating the principle of sufficient reason in every case. The principle of sufficient reason is a derivative of the law of non-contradiction.
The principle of sufficient reason can be expressed as follows: for every fact F, there must be a sufficient reason that F is true. It follows that knowledge that F is true obtains only in virtue of knowledge of a sufficient reason that F is true. A sufficient reason is such that it precludes the possibility that F is false. Crucially, knowledge of 'the probability that F is true' is not logically equivalent to knowledge that 'F is true'. It is irrational to believe that F is true without possessing the knowledge that F is true.
Let P signify the knowledge that F is true. The law of non-contradiction: P cannot be true and false at the same time and in the same respect, or ¬(P ∧ ¬P). To assert that ‘P is true’ without a sufficient reason implies that ‘P is true without a sufficient reason’, therefore any claim can be true without sufficient reason, therefore the negation of P can also be true without sufficient reason, therefore contradiction. In short, any assertion of fact that violates the principle of sufficient reason is self-negating.
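Laid out schematically (a loose sketch of the steps above, not a rigorous formalization; Suff(R, F) is just shorthand for 'R is a sufficient reason for F'):

```latex
% Schematic restatement of the argument above; informal, not a formal proof.
\begin{align*}
\text{(PSR)}\quad & \forall F\; \exists R\; \mathrm{Suff}(R, F)
    && \text{every fact has a sufficient reason}\\
\text{(LNC)}\quad & \neg (P \wedge \neg P)
    && \text{non-contradiction}\\
\text{(1)}\quad & P \text{ is asserted with no } R \text{ such that } \mathrm{Suff}(R, P)\\
\text{(2)}\quad & \text{then any claim may be asserted on the same (empty) grounds}\\
\text{(3)}\quad & \text{in particular, } \neg P \text{ may be so asserted}\\
\text{(4)}\quad & \text{so the practice licenses both } P \text{ and } \neg P\text{, contradicting (LNC)}
\end{align*}
```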
A simple way out of this bind would be to accept the judgement of experts as a working hypothesis, provided there are no moral objections to trying it out, but not accept it as ‘therefore’ true or right.
2.
Expert advice is not merely about value-neutral, technical facts (what ‘is’) but is also normative in the practical sense (what one ‘ought’ to do). So when public experts ‘give advice’ they are persuading you to act in a particular way, telling you what ‘you ought to do’; they thus also make an implicit moral judgment and give moral advice (often without being aware of doing so), or make a judgment about ‘your best interest’ (a subject matter they are typically not experts on).
Let us say that expert technical advice is intended to persuade you to act in a particular way, and you uncritically accept it on technical or utilitarian (risk vs. benefit) grounds. The question still remains whether following such (technically true) advice would be morally right. Whether the advice of experts is morally wrong or right is typically outside the domain of their expertise, and yet this distinction alone can disqualify their advice.
But let us consider a stronger case for expert advice, where the technical expert is also an expert in moral philosophy. Even in this case it would be immoral to delegate moral judgment to an expert on morality, because such a delegation purports to abrogate our own moral responsibility. The moral responsibility of an agent cannot be delegated; it applies to the agent’s every action, including the act of agreeing to act on expert advice. In that case one could be freely agreeing to do something morally wrong, because experts are sometimes morally wrong (see the Milgram experiment https://en.wikipedia.org/wiki/Milgram_experiment). Implicitly agreeing and intending to do something morally wrong is itself morally wrong. In short, experts must not ever be trusted, at the very least, on moral grounds.
There is another reason why the advice of experts must not be trusted. Experts in a particular field can agree that misleading the public is the rational thing to do in the interest of humanity. For example, the experts may have reached moral consensus that anyone who would delegate their moral authority to experts is inherently immoral, practicing renunciation of moral agency, therefore inherently harmful and implicitly negating their own moral status, and therefore ought to be exterminated; the experts may then decide to supply the majority with false or harmful information in order to accomplish the demise of anyone who would be morally deficient in this way.
This post seems to miss a key issue. In any case where experts disagree but you need to act based on concluding one way or the other, you will be using either critical thinking or something else to decide which way to go. Which is going to be better?
Where you don’t have to act, why waste time on any analysis except as a pastime, in which case, again, what is going to serve you better?
Would I not be better off forming an opinion by myself based on available data and only then consulting experts? I am capable of synthesizing data, although some data is going to be obscured from my view, and other data will be skewed to the expert's slant on an issue. I would be more fully informed knowing how I feel about an issue, even without knowing whether I am correct about the factual basis for feeling that way; my view could then be amended by someone who knows more.
If I start out by accepting the expert's view it seems less likely that I would challenge it, or that said view would challenge me.
Is "critical thiking" really taught as "coming to conclusions yourself, regardless of expert opinion?" I guess if you could show the curriculum where it says things like this that would be one thing, but I have never really gotten the impression that that is what is meant by critical thinking.
1) Inputting expert opinion is one very important part of "critical thinking." A critical thinking class would make all the same points you have made in this article.
2) I have always had the impression that "critical thinking" is taught as what you are supposed to apply when you see things like the following:
a) Ads on YouTube about fitness which promise Miraculous Results That The Experts Don't Want You To Know About by taking a body type quiz
b) Supplement gurus
c) Lose 10 lbs of belly fat with this one weird trick
d) A journalist with no credentials in an area laying out a flimsy argument
e) Spotting how politicians use fallacies and rhetoric in speeches
f) Politicians again
g) etc.
I really liked this. It clarified something that should have already been clear to me, but wasn't.
The individually rational choice, in the typical case such as your "abortion, gun control, global warming," is to find experts who support the conclusion popular with the people around you and believe them. Your belief, after all, will have no significant effect on how the issue gets dealt with but a significant effect on how you interact with the people who matter to you. Dan Kahan, as you may know, has offered empirical evidence that that is how people behave, specifically that the more intellectually able you are, the more likely you are to agree with your group's views, whether that means believing in evolution or not believing in evolution.
If you ask instead how it is in the general interest for you to behave, the problem with your answer is that the more people follow the "believe the experts" rule, the greater the incentive for partisans to try to control who counts as an expert — avoid funding research or giving promotions or publishing articles by people on the wrong side of a controversy. The result is to corrupt the scientific enterprise.
I think the big difference between medicine and politics, which explains why critical thinking is appropriate for the latter but not the former, is is-ought. Medicine is an "is" discipline (we take it as a given that the goal is to keep the patient healthy and/or alive); politics is an "ought" discipline (people differ very strongly when asked what a good society looks like). And expertise cannot help with "ought" questions. Of course, if society had a settled view on what a good society looked like, then it might be appropriate to collectively defer to experts on how to achieve one in practice. But obviously that's a scenario that will never happen.