Jean Twenge also wrote an entire book that, as far as I can tell from online reviews, attributes a bunch of societal shifts to generations, which are basically arbitrarily chosen 20-year-or-so periods. This seems like it obviously has to be bogus pop psychology? People don't suddenly become new people because the year is different; all these variables should be changing continuously. I haven't actually read the book, but now I'm tempted to, because it so predictably seems bad and maybe it would be fun to rip it apart?
So yeah, based on this evidence I feel pretty confident saying that Twenge has low scholarly standards.
Timely reminder to take ALL studies with a grain of salt. There are always more perspectives that could be relevant but even provided citations are selected to support the opinion being argued.
How about a meta-analysis? Am I allowed to take the abstract at face value? 😉
No, absolutely not. In fact, meta-analyses are worse.
OK, so how are ordinary people supposed to find trustworthy info? Can I go based on surveys of economists? https://kentclarkcenter.org/surveys/
How about you publish your phone number so I can just call you?
You can only do you but a grain of salt maintains open mindedness.
Nicolas, as someone with a hobby of reading the psychological literature I predict if you keep at it you would be writing many letters of concern to editors
Just wait until you find outright fraud xd.
You accidentally linked to Morehead, Dunlosky, and Rawson again in the hyperlink for Urry et al
Thanks, will change
Ofc
Your points are all good, but plenty of users are making claims on social media based on just an anecdote or two, or even no supporting evidence whatsoever.
Most people don't have the epistemological literacy to give such claims the feather weighting they deserve.
I want to improve public epistemics, but I also don't have the time to read every study I cite. So my compromise is roughly as follows. I will search for studies and read the abstract. If a study is relevant to the discussion, I copy/paste the abstract along with a link to it. That doesn't imply any endorsement of the methodology; it just alerts people to a study with X purported finding.
Alternatively, I will simply make a claim with no supporting evidence whatsoever, secure in the knowledge that there is in fact a study which supports my claim (but I don't cite said study, in order to maintain an epistemological safety margin, so featherweighters can featherweight if they like).
Another thing I will do is feed my comment to an AI before publishing and ask it to do a fact check. I usually write things I already have independent reasons to believe are true, then pass them by the AI as an additional misinformation filter.
I'm not aiming for perfection, I'm just trying to improve public epistemics a bit more than the average internet user. Maybe I should calibrate my approach based on the prevailing epistemic quality of a given space, and try to avoid polluting spaces which adhere to a higher bar than what I just described. I'm not sure there *are* any such spaces online, however.
I suppose we could differentiate meaningfully between UGC like this Substack comments section, and an NYT editorial which is supposed to be held to a higher standard. Ironically I think people are psychologically primed to weight comment sections quite highly, which seems like a pretty serious flaw in society come to think of it.
I appreciate your overall point, and the correction you made seems important, but I'm not sure how one would write an opinion piece in the NYT without "reasoning without citing studies". Twenge's opinions come from years of research and have been documented in many articles she wrote, which tackle questions of causality and so forth. But she can't be expected to discuss all the nuances of the literature in every piece.
She can be expected to honestly and accurately report what the studies say, at a minimum. And since she is an expert, I would also hold her to the higher standard of being discerning about what she cites, and only citing good work.
Thanks for this article. I find value in Twenge's work: she started working on this topic early, which I believe has been impactful in pushing others to ask the same questions. Much of this work is indeed correlational, but that reflects standards in psychology and its different underlying positions, specifically its take on the power vs. significance tradeoff (in a broad sense: if I require more evidence to reject any null, I will be able to reject fewer null hypotheses). I know you've written about this before, but it is not at all uncommon for economists to provide better-quality evidence only after others have asked the question and taken the first steps toward answering it.
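The parenthetical tradeoff above can be made concrete with a quick calculation. This is a minimal sketch (my own illustration, not anything from the comment) using a standard two-sided one-sample z-test approximation: holding the sample size and true effect fixed, demanding a stricter significance threshold (smaller alpha) mechanically lowers statistical power, i.e. fewer true effects get detected.

```python
from statistics import NormalDist

def power_two_sided_z(effect_size: float, n: int, alpha: float) -> float:
    """Approximate power of a two-sided one-sample z-test for a
    standardized effect size with n observations.

    Power ~= 1 - Phi(z_crit - effect_size * sqrt(n)), ignoring the
    negligible contribution from the opposite tail.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)        # critical value for two-sided test
    shift = effect_size * n ** 0.5           # mean of the test statistic under H1
    return 1 - z.cdf(z_crit - shift)

# Stricter evidence standards (smaller alpha) => lower power at fixed n:
for alpha in (0.05, 0.005):
    print(alpha, round(power_two_sided_z(0.3, n=100, alpha=alpha), 3))
# 0.05  -> ~0.85 power
# 0.005 -> ~0.58 power
```

So a field that insists on stronger evidence before rejecting any null will, all else equal, reject fewer nulls, which is the sense in which the comment frames psychology's correlational norms as a position on this tradeoff rather than pure sloppiness.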