all 11 comments

[–]Good_As_You 10 insightful - 1 fun - (4 children)

I have just started to look into the original paper and... WOW! The quality of the data is appalling, to say the least: missing values everywhere, information like sex or dosage not reported, physical measures' scores wildly changing back and forth probably because the N=44 shrank to N=24 and N=14, etc.

We're doomed if this is what they refer to when talking about "settled science" that we need to trust.

I will read it all and post a reply with more details on the flaws I'm able to identify, but it's baffling that this got peer-reviewed and published, even if PLOS One isn't the greatest of journals.

[–]Q-Continuum-kin 7 insightful - 1 fun - (0 children)

The original study was published by Tavistock, which should have been a dead giveaway from the beginning that it might be suspect. Also, if anyone says something about settled science and then quotes WPATH as a source, I take it even less seriously. They might have some accurate information, but they are purely an activist organization, so everything that comes from that type of place is going to be spinning a narrative.

[–]WanderingWonderWizard Extraordinaire 4 insightful - 1 fun - (0 children)

I look forward to seeing your reply

[–]reluctant_commenter[S] 4 insightful - 1 fun - (1 child)

Yikes, but I'm not surprised. I haven't looked at the actual dataset yet. I can't decide whether to laugh or be pissed off. This is reflective of how little trans "rights" activists care about the health and well-being of GNC people, LGB people, and "trans" people, who they claim to want to protect...

We're doomed if this is what they refer to when talking about "settled science" that we need to trust.

Agreed, and it's worth mentioning that very, very few phenomena are agreed upon as "settled science," lol. 9/10 times when I hear this phrase I am immediately suspicious. It's a phrase designed to deter further investigation, and sometimes people will use it even about findings that aren't politically charged but that they are personally invested in.

but it's baffling that this got peer-reviewed and published

The BBC article says it hasn't been peer reviewed yet, actually. But I haven't looked at the study itself yet, so I'll go ahead and do that too, hopefully later today. Thanks for looking :)

[–]Good_As_You 3 insightful - 1 fun - (0 children)

It's the new re-analysis that hasn't been peer-reviewed; the original 2021 study has been. You can even read the comments by the anonymous reviewers: https://journals.plos.org/plosone/article/peerReview?id=10.1371/journal.pone.0243894

[–]Chocolatepudding 6 insightful - 1 fun - (1 child)

So just as likely to experience a negative effect as a positive one, while the majority saw no change... please someone correct me if I'm wrong. Very small sample, not peer-reviewed, but absolutely not the message usually given either

[–]reluctant_commenter[S] 8 insightful - 1 fun - (0 children)

while the majority saw no change...

Plurality (37%), not majority; but otherwise, yes, that all sounds about right. edit: That is to say, 71% of the participants saw their mental health either not change or worsen, and only 29% experienced the desired outcome.... not exactly what one would hope for if the entire point of puberty blockers is supposedly to "prevent suicide" and "improve mental health."
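For anyone who wants the arithmetic spelled out, here's a quick sketch. The "worsened" share is just the arithmetic leftover from the two percentages above, not a figure I copied out of the paper:

```python
# Back-of-the-envelope check of the figures above. The "worsened" share is
# simply what's left over arithmetically, not a number pulled from the paper.
improved  = 0.29   # experienced the desired outcome
no_change = 0.37   # the plurality: no reliable change
worsened  = 1 - improved - no_change

print(f"worsened: {worsened:.0%}")                        # ~34%
print(f"no change or worse: {no_change + worsened:.0%}")  # ~71%
```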

I'm curious to see what future studies will say. Trans rights activists have been fighting tooth and nail to stop studies on the side effects of puberty blockers, HRT, and the like, so it's not surprising that the few studies on this topic are small ones. But with additional recent attention to the topic, hopefully there will be more data soon.

[–]PriestTheyCalledHimBisexual 5 insightful - 1 fun - (0 children)

This should not surprise anyone. A bi friend and I had to drop a trans friend, MTF, whom we knew before she transitioned. Once she started hormones, both the testosterone-blocking ones and estrogen, AS AN ADULT, she became extremely mean, vindictive, jaded, and just so extremely negative that neither of us wanted to be around her or talk to her.

[–]INeedSomeTimeAsexual Ally 4 insightful - 2 fun - (1 child)

The most bizarre thing about puberty blockers is that instead of testing them properly before using them, they just rolled with it without giving it too much thought, and if anyone had objections you were called a hateful transphobe.

Honestly, I was thinking about this shit a while ago and concluded that experimenting on people who are considered to be trans is hateful and transphobic itself! Regardless of whether these people are actually trans, somehow it's super acceptable to experiment on mentally vulnerable people. It's so sick.

[–]reluctant_commenter[S] 1 insightful - 1 fun - (0 children)

Honestly, I was thinking about this shit a while ago and concluded that experimenting on people who are considered to be trans is hateful and transphobic itself!

Yup. One of the #1 rules of human subjects research is that the participants need to be fully informed about the risks of participating. That principle was completely, flagrantly disregarded in the case of puberty blockers. I know there is a lot of talk about how children can't consent to puberty blocker experimentation because they're minors, which is true, but more generally, people can't consent to puberty blocker studies regardless of their age because these studies-- the vast, vast majority of them, if not all-- do not inform participants about the risks of the drugs they're taking. I mean, we've known for years that Lupron chemically castrates adults... there's no excuse to not inform participants of that. But so many detrans people said they were never informed, and so many trans people either never heard of it or deny it-- and I'm sure it's tempting to deny because it's horrifying to imagine that that could have really happened to you!

Regardless of whether these people are actually trans, somehow it's super acceptable to experiment on mentally vulnerable people. It's so sick.

And yeah that's all the more reason. Shit's heartbreaking.

[–]Good_As_You 3 insightful - 1 fun - (0 children)

This is going to be a long write-up, but here are some of the flaws I found in the study.
It's always important to be meticulous when reading scientific studies, since people use them to jump to conclusions, and we can't blindly trust what the authors are saying given that their data might be faulty to begin with.

Methodological flaws

  1. No control group was used, so we can’t know whether people with GID would have had similar results had they not taken hormone blockers (HBs).
  2. The consent form given to children informed them that the study was trying to find out if the treatment would make them feel better about themselves, and told them one of the benefits of HBs is the “improvement on the way you feel about yourself.”
    This is, in my view, subconsciously influencing the kids to report positive results, and it’s very bad research practice. You should always go in without loaded assumptions, especially if you’re trying to determine the effects themselves.
  3. They mention “time off treatment” in the data dictionary, which implies some kids weren’t always taking the HBs throughout the study. The data regarding that is removed for privacy issues but would be really important if you’re studying the impact of HBs and some individuals just stop using them.

Data collection

  1. There might be many confounding factors affecting the psychological wellbeing of the participants that aren’t accounted for. The patients aren’t just taking HBs, they’re also getting therapy and psychosocial support, which should be causing a positive effect regardless.
  2. The sample size is N=44/24/14/4 for t=1/2/3/4 years, respectively. Or so they claim, because only 43/20/11/0* have CBCT data available.
    1. Participants are excluded from the study when they turn 16. We can’t know how the results would’ve been if they kept monitoring them, but the focus of the paper is only on younger people’s puberty blocking and not on HRT.
  2. Changing the number of participants influences the z-scores (which express a relationship to the mean instead of absolute values), making the analysis of year-after-year trends almost meaningless (see the sketch after this list).
    3. The data for the 4th year is not given because it might be personally identifiable and there’s too few participants.
  3. Some of the data are objective physical attributes, but the bulk of it is based on self-reported questionnaires from both the child and the parents. I guess many psychology studies are based on that type of subjective evaluation, since it’s hard (or impossible) to gather concrete proof, but it’s very unreliable and shouldn’t be the basis of any paper (much less of life-altering medical decisions!) in my opinion.
    Wouldn’t participants feel pressured to state that they’re happy with the treatment, knowing that they’re part of a study that might impact its availability in the future?
  4. Asking the children themselves is already subjective; the parents' accounts might be even more skewed. I haven't seen any indication of whether the same parent filled out the questionnaires every time or whether both had to agree.
  5. There are tons of values missing in the dataset:

    1. Some of the 10 dimensions are missing in both parent and kid surveys, which is strange because they’re derived from a questionnaire with 52 items so it’s not simply caused by forgetting to answer one of those questions.
    2. One set of parents never filled out any of the questionnaires
    3. Physical measurements weren’t taken in some yearly check-ups
    4. One kid didn’t even get their hip bone mass density recorded when starting the study, which is needed as a baseline for the subsequent years.
    5. No data on Youth Self-Report and questionnaires at the 4 year mark.

    It’s a total mess and any scientist with some semblance of rigour ought to be embarrassed to present findings using this data. Again, I don’t know if this is usual in the field of psychology, but at the very least they should drop participants with missing information from the analysis; that isn’t really possible here, though, since they are already working with such a dismal sample size.

    The authors set “failure to attend for tests and scans” as an exclusion criterion, which they clearly haven’t followed since they’re including people with missing tests and scans in the dataset and their analysis. They even acknowledge that they “made no attempt to account for missing data” because of the small sample size.
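To make point 2 above concrete, here's a minimal sketch (Python, with made-up numbers rather than anything from the actual dataset) of how the exact same unchanged measurements get different z-scores once the comparison group shrinks from 44 to 24:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "height" measurements for 44 participants (NOT the study's data).
year1 = rng.normal(loc=160, scale=8, size=44)

# Year 2: suppose nobody's value changes at all, but only 24 participants remain.
remaining = year1[:24]

def zscores(values):
    """Standardise against the mean and SD of whichever group you pass in."""
    return (values - values.mean()) / values.std(ddof=1)

z_year1 = zscores(year1)[:24]   # z-scores computed within the full N=44 group
z_year2 = zscores(remaining)    # z-scores computed within the shrunken N=24 group

# Same raw values, different z-scores, purely because the reference group changed.
print(np.round(z_year1[:5], 2))
print(np.round(z_year2[:5], 2))
```

Same raw values, different z-scores; any year-on-year "trend" in those numbers is partly just the sample changing under you.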

Representativeness of the sample compared to the general population

  1. The distribution of ages was: 20/44 (45%) were 15 years old, 10/44 (23%) 14 y.o., 10/44 (23%) 13 y.o. and 4/44 (9%) 12 y.o.
    Is it correct to treat them as part of the same population for the study? Are 15-year-olds (the largest group in the study) that have already undergone a significant fraction of puberty going to sustain comparable changes (potentially beneficial and harmful) to 12-year-olds?
  2. All participants must have been interviewed multiple times by the Gender Identity Development Service and must have been deemed psychologically stable enough to endure the stress induced by the treatment. Anyone with bipolar disorder, anorexia or any other serious psychiatric condition is automatically excluded.
    This is not inherently wrong, but it must be clear that these results aren’t applicable to everyone that might want to take HBs, only to those with verified GID and mental stability. Rest assured, people will use results from this and similar studies to talk about the safety of HBs for people with a list of mental disorders on their Twitter bio.
  3. In a similar fashion, only children with parents who supported them taking HBs were eligible for the study. I assume the consequences would be even worse without a supportive or encouraging family.
    Given that the NHS already states that children under 16 can consent to treatment without their parents’ knowledge if they’re believed to have enough intelligence and understanding, and even courts can overrule parents’ refusal if they believe it’s in the best interests of the child, there’s already grounds for children to use HBs in difficult family situations.
  4. One should also consider: how significant are the changes caused by HBs within 1, 2 and 3 years of monthly 3.75 mg of triptorelin? I have no idea what that entails, but it seems to match the dose prescribed for children with precocious puberty. I assume that would only slow down the effects of puberty and a higher dose would be required for those wanting to block pubertal changes related to their sex.

    Could the negative results be partially explained by the fact that they still went through a somewhat normal puberty for their sex, since the dose was low enough, as opposed to what they were told?

Reported data

  1. Neither the individual participants’ sex nor initial pubertal stage is indicated, even though it was recorded by the researchers.
    I’m no doctor or biologist, but I would assume those factors to be highly important when analysing the effects of pubertal suppression, more so considering that the females in the study were older than their male counterparts, who on average go through puberty later.
  2. The paper discusses some more questionnaires but doesn’t report them in the dataset: Self-Harm Index, Body Image Scale, Satisfaction with GnRHa, etc.
  3. There’s a bunch of other indicators studied but not reported on the public dataset, like testosterone/oestrogen levels, blood pressure, gonadotropins, bone mineral content, etc.
    It is generally advisable to perform the analysis on all of the collected indicators and report it, otherwise it could be indicative of concealment of unwanted results or cherry-picking only the best outcomes.
  4. As mentioned beforehand, only z-scores and t-scores are included in the public dataset, which measure how far a given value is from the mean. Because of this, we can’t tell whether the patients’ mental and physical health improved or worsened in absolute terms, only how they changed relative to the group.

    Even if all subjects got better in every metric, around half of them would still be below any given year’s mean (assuming normal distribution) and would get negative z-scores (see the sketch at the end of this section).

    I honestly don’t understand why we’re even given this dataset; please, someone with more statistical knowledge enlighten me, because I don’t see how to interpret it in any other useful way. Could it be that the z-scores were calculated against the general population of the same age? I don’t see that being the case, since there are multiple scores of, say, height that are identical from 1 to 2 years into the study with a precision of 13 decimal places.

    I don’t think the z-scores are calculated in relation to fixed baselines either, since some individuals’ height scores go down, which would imply they’re calculating the new z-score within a different population.
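To illustrate that point about within-group scores, here's a minimal sketch (again Python with invented numbers, and assuming the z-scores really are computed within each year's group, which is my reading rather than something the paper states):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented wellbeing scores for 24 participants (not real data).
baseline = rng.normal(loc=50, scale=10, size=24)
followup = baseline + 5          # EVERY participant improves by 5 points

def within_group_z(values):
    # Standardise against the follow-up group's own mean and SD.
    return (values - values.mean()) / values.std(ddof=1)

z_follow = within_group_z(followup)

print("all improved in raw score:", bool(np.all(followup > baseline)))   # True
print("negative z-scores anyway :", int(np.sum(z_follow < 0)), "of", len(z_follow))
```

Every single "participant" improved, yet roughly half of the z-scores still come out negative, which is why these scores can't answer "did they get better?" on their own.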

Analysis of results

I won’t criticise the analysis of the original results, given that the new paper already does so and I’m not part of the field. Even with the updated re-analysis, I wouldn’t feel comfortable drawing any conclusions from data with such high variance, so small a sample size, uncontrolled variables, a lack of transparency, a short evaluation period and this much subjectivity.