Pseudoscience


Reply Fri 4 May, 2007 09:01 pm
Pseudoscience
A good read: "Distinguishing Science and Pseudoscience" by Rory Coker, Ph.D.

A few good quotes:
Dr. Coker wrote:

The statement "Science cannot explain" is common in pseudoscience literature.

[Pseudoscientists] argue from irrelevancies: When confronted by inconvenient facts, they simply reply, "Scientists don't know everything!"

Science relies on—and insists on—self-questioning, testing and analytical thinking that make it hard to fool yourself or to avoid facing facts. Pseudoscience, on the other hand, preserves the ancient, natural, irrational, unobjective modes of thought that are hundreds of thousands of years older than science—thought processes that have given rise to superstitions and other fanciful and mistaken ideas about man and nature—from voodoo to racism; from the flat earth to the house-shaped universe with God in the attic, Satan in the cellar and man on the ground floor; from doing rain dances to torturing and brutalizing the mentally ill to drive out the demons that possess them. Pseudoscience encourages people to believe anything they want. It supplies specious "arguments" for fooling yourself into thinking that any and all beliefs are equally valid. Science begins by saying, let's forget about what we believe to be so, and try by investigation to find out what actually is so. These roads don't cross; they lead in completely opposite directions.
 
Anonymous
 
Reply Mon 7 May, 2007 08:59 am
Nice article
I was particularly struck by the following statement:

Quote:
Some confusion on this point is caused by what we might call "crossover." "Science" is not an honorary badge you wear, it's an activity you do. Whenever you cease that activity, you cease being a scientist. A distressing amount of pseudoscience is generated by scientists who are well trained in one field but plunge into another field of which they are ignorant. A physicist who claims to have found a new principle of biology—or a biologist who claims to have found a new principle of physics—is almost invariably doing pseudoscience.


There is a technology called "risk assessment," based on social science theory and statistical modeling, in which analysis can predict the likelihood that violence, suicide, or child abuse will occur within a specified population. This is why it's so bogus for a theologian who is not trained in social science, or even a social scientist who is not trained in risk assessment, to pronounce that children growing up in TFI are at no greater risk of abuse than children growing up in the general population.

A closely related science--epidemiology--can determine whether accidental deaths, suicide, or reports of child abuse occur at a rate that far exceeds their occurrence in the general population. In addition, statistical science can demonstrate that the high rate of suicide in a population such as TFI is more than a fluke or a naturally occurring deviation resulting from sampling error. The data needed to do this according to the most rigorous statistical tests aren't available; however, if you understand the general principles involved, you can get by using the available data to do an estimation procedure. When I've done this, I've gotten into arguments with exers who either didn't fully understand the principles involved or wouldn't accept the validity of my work-arounds. Not accepting the validity of a work-around is legitimate scientific vetting. Questioning the method by which a scientific finding is derived is a major way scientists push for newer, better studies that produce more reliable information.
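
To make that concrete, here's a minimal sketch in Python of the sort of test I mean: a one-sided Poisson test of whether an observed death count is plausible if the special population experienced the general population rate. Every number in it is a hypothetical placeholder, not actual TFI data.

```python
# A minimal sketch of the estimation procedure described above, using a
# one-sided Poisson test. All numbers are hypothetical placeholders.
from scipy.stats import poisson

observed = 20          # hypothetical suicides observed in the special population
population = 10_000    # hypothetical special population size
years = 5              # observation window
general_rate = 11.0    # approximate general population rate per 100,000 per year

# Expected count if the special population experienced the general rate
expected = general_rate / 100_000 * population * years   # 5.5

# Probability of seeing at least `observed` deaths by chance alone
p_value = poisson.sf(observed - 1, expected)
print(f"expected {expected:.1f}, observed {observed}, p = {p_value:.2e}")
```

A p-value that small is the "more than a fluke" claim in statistical form: the excess is very unlikely to be sampling error alone.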
 
Thorwald 1
 
Reply Mon 7 May, 2007 06:01 pm
We have been wanting to write an article on the suicides of current and former members, using available data and statistics. We started one a while back but never took it further. I would be interested in learning more about the principles you described, and would even like to ask for your help in writing this article. Would you be interested?
 
Anonymous
 
Reply Tue 8 May, 2007 06:42 am
I've been interested in doing this article for a long time now. The "principles" to which I referred are actually called assumptions. One of the biggest arguments I've gotten into with exers about the stats I've done on special population suicide rates deals with assumptions about distribution and variability. Let's say there have been 20 suicides among former members in the last 5 years. The actual distribution of those deaths across 2001--2005 may look like this: 2, 0, 4, 8, 6. That's really quite a lot of variability. You can say that for the five-year period, the suicide rate averaged 4 per year. However, you can't say there were 4 suicides in any particular year--only 2003 happens to match the average. Thing is, we only have total suicide numbers over a given time period to work with--we can't say how many suicides occurred in any given year to begin with. However, if we specify a yearly average over a specified period of time, that suffices as a special population estimate for the purpose of comparison to the general population rate.
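
To put numbers on that variability, here's a quick sketch using the example counts above (hypothetical, as stated):

```python
import statistics

counts = [2, 0, 4, 8, 6]            # the hypothetical yearly counts, 2001-2005
mean = statistics.mean(counts)      # 4.0 suicides per year on average
stdev = statistics.stdev(counts)    # ~3.2, large relative to the mean
print(f"mean {mean:.1f} per year, sample stdev {stdev:.1f}")
```

A standard deviation nearly as large as the mean is exactly why the average can't be read as a statement about any single year.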

A second problem arises when you compare this average of 4 suicides per year over five years to a general population estimate. First, you have to standardize the special population estimate to a rate of N per 100,000 population per year, because that's how general population rates are calculated. Say that at the beginning of 2001 we begin tracking deaths in a special population of 10,000 people. At an average rate of four per year over 5 years, that's a yearly estimate of 40 per 100,000.
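
The standardization step itself is just arithmetic; as a sketch with the same hypothetical numbers:

```python
avg_per_year = 4          # hypothetical average suicides per year
population = 10_000       # hypothetical special population size

# Scale the raw yearly average to an N-per-100,000-per-year rate
rate_per_100k = avg_per_year / population * 100_000
print(f"{rate_per_100k:.0f} per 100,000 per year")   # 40 per 100,000
```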

The variability in the suicide rate of the general population is extremely low from year to year. Between 2001 and 2005, the N-per-100,000 rate looks something like this: 11.2, 11.8, 11.0, 10.9, 10.6. It's because this comparison rate is so stable (lacks much variability) that we can get away with averaging the special population rate to a yearly rate of 40 per 100,000. I've not been terrifically successful in convincing some people at the exer sites that it's kosher to use a yearly average for purposes of comparison. I think it's because they don't understand the relationship of variability to an overall distribution pattern.
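
One way to show how lopsided the two sources of variability are is to compare the coefficient of variation (stdev divided by mean) of each series. A sketch using the figures above, with the example yearly counts scaled to per-100,000:

```python
import statistics

general = [11.2, 11.8, 11.0, 10.9, 10.6]   # per 100,000, from above
special = [20.0, 0.0, 40.0, 80.0, 60.0]    # example counts scaled to per-100,000

for name, rates in (("general", general), ("special", special)):
    cv = statistics.stdev(rates) / statistics.mean(rates)
    print(f"{name}: mean {statistics.mean(rates):.1f}, CV {cv:.2f}")
# general: CV ~0.04; special: CV ~0.79
```

The general population series barely wiggles, which is what licenses treating it as a fixed benchmark for the averaged special population rate.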
 
Thorwald 1
 
Reply Tue 8 May, 2007 05:19 pm
BE wrote:
The variability in the suicide rate of the general population is extremely low from year to year.


I would assume that is because the general population of a given society (e.g., the US) is orders of magnitude larger than TFI ever was. If, for example, we assume that TFI had 30,000 members at its peak (and it never had that many at any given point) and the population of the US is 300,000,000, we have four orders of magnitude of difference (3x10^4 vs. 3x10^8).

More stable statistical estimates, greater predictive power, and a decrease in variability almost always follow from an increase in sample size.
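
A back-of-the-envelope sketch of that claim, assuming a simple Poisson model for suicide counts (the rate below is an assumed placeholder):

```python
import math

# Under a Poisson model, the standard error of a per-100,000 rate estimate
# is sqrt(rate / N) * 100,000, so it shrinks with the square root of N.
rate = 11.0 / 100_000            # assumed underlying yearly suicide rate

for n in (30_000, 300_000_000):  # roughly TFI at its peak vs. the US
    se = math.sqrt(rate / n) * 100_000
    print(f"N = {n:>11,}: SE = {se:.3f} per 100,000")
# N = 30,000: SE ~ 6.1; N = 300,000,000: SE ~ 0.061
```

Four orders of magnitude in population buys two orders of magnitude of precision, which is why the general population rate barely moves from year to year.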
 
BlackELk
 
Reply Tue 8 May, 2007 06:31 pm
Quote:
More stable statistical estimates, greater predictive power, and a decrease in variability almost always follow from an increase in sample size.


Yup, which is why you have to standardize the magnitude of a special population in order to compare it to the general population.

There is still a methodological problem with comparing a special population sample to the general population, but TFI decided to do this in their public statements. What they failed to do was standardize their estimated rate before comparing it to the general population. If you standardize their rate using the bogus numbers they provide, you'll see that the admitted suicide rate for TFI is significantly higher than that of the general population.

The methodological problem stems from the fact that the suicide rate for the general population of a country (or the world) presumably includes the suicide rate of a special population sample such as TFI. It's like asking, "Is this apple significantly different from all the other apples on the tree, including this apple?" A more valid comparison would involve two special population samples, such as the suicide rate for Mormons versus that of TFI.
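
That said, you can put a number on how much the overlap actually matters. A quick sketch with hypothetical figures shows the effect is tiny when the special population is small, so the overlap is more of a conceptual wart than a numerical one:

```python
gen_pop, gen_deaths = 300_000_000, 33_000   # hypothetical: 11 per 100,000
spec_pop, spec_deaths = 10_000, 4           # hypothetical: 40 per 100,000

# General rate with and without the special population included
with_overlap = gen_deaths / gen_pop * 100_000
without = (gen_deaths - spec_deaths) / (gen_pop - spec_pop) * 100_000
print(f"{with_overlap:.4f} vs {without:.4f} per 100,000")   # 11.0000 vs 10.9990
```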
 
Thorwald 1
 
Reply Wed 9 May, 2007 11:39 am
BE wrote:
A more valid comparison would involve two special population samples, such as the suicide rate for Mormons versus that of TFI.


I wonder if TFI has its own Jello Belt?

I should think a better comparison would be against the Romani people (aka Gypsies). They are scattered across the globe, purposely keep themselves segregated from society, are continually on the move (or nomadic), etc. Of course, there are around 10 million of them, several orders of magnitude more than the population of TFI, and they have been around much longer.
 
Anonymous
 
Reply Wed 9 May, 2007 06:49 pm
Jello belt! That's a bit of cultural trivia I haven't heard before. I have a Mormon co-worker on my research team--I'll have to ask her about that, particularly the business about lime jello being "the most stereotypically Mormon of all the flavors."

Differences in population size/magnitude are handled by standardizing the percentage of an occurrence to a per-100,000 rate. The problem with finding suicide rates for a comparable special population sample (like the Romani) is that the data generally are not reported that way unless there has been a special study of a subpopulation sample.

Did you see this article in Wikipedia?

http://en.wikipedia.org/wiki/Epidemiology_and_methodology_of_suicide

The wiki article provides a good overview of the epidemiology of suicide. It doesn't really talk that much about the methodology for estimating incidence. When you look at the sections on Social Factors, Health, and High-Risk Groups, you can see why TFI's rate is so much higher than that of the general population in the US or UK. You could make an argument for Hungarian statistics as a proxy for Romani rates, though. The interesting thing about Hungary is that it also has one of the highest rates of cirrhosis of the liver, a disease linked to alcoholism. BTW, there are studies using a method called psychological autopsy indicating that in about 90% of completed suicides the person had a diagnosable mental illness or substance abuse disorder.

When I googled "Mormon suicide rate," I got what I expected to see: a study demonstrating an extremely low suicide rate in a subpopulation sample of committed, practicing Mormons. (It helps that practicing Mormons don't use alcohol.) When I googled "Utah suicide rate," however, I discovered that in 2003, Utah had the 10th-highest suicide rate of any state in the US. Approximately 72% of the Utah population is identified as Mormon. It's interesting that someone funded faculty at Brigham Young University to do the special population study, and that it is followed by an article claiming Utah's high suicide rate has nothing to do with stresses in the LDS church. I think it's safe to assume that "backslidden" Mormons are contributing to Utah's unusually high rate.

http://www.adherents.com/largecom/lds_LowSuicideRate.html
 
winter 1
 
Reply Fri 11 May, 2007 10:06 am
BlackELk wrote:

There is still a methodological problem with comparing a special population sample to the general population, but TFI decided to do this in their public statements.


They can count and stuff? Really? Amazing. I was always taught that was evil and that it's better to be ignorant and fearful and not think, but rest in the "word."
 
BlackELk
 
Reply Wed 23 May, 2007 07:54 am
Cargo Cult Science
Yeah, TFI can count, but they aren't very honest, so it affects their results.

Now that we're in the season of college commencement speeches here in the US, I thought it might be appropriate to nominate this address, given by the physicist Richard Feynman at Caltech in 1974, as one of the best ever on the subject of science and self-delusion. It's called "Cargo Cult Science," and you can read more of Feynman's entertaining commentaries in his book "Surely You're Joking, Mr. Feynman!" Here is an excerpt:

"In the South Seas there is a cargo cult of people. During the war they saw airplanes with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head to headphones and bars of bamboo sticking out like antennas -- he's the controller -- and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.

Now it behooves me, of course, to tell you what they're missing. But it would be just about as difficult to explain to the South Sea islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones. But there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school -- we never say explicitly what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty -- a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid -- not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked -- to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can -- if you know anything at all wrong, or possibly wrong -- to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

In summary, the idea is to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgement in one particular direction or another."

http://wwwcdf.pd.infn.it/~loreti/science.html
 
winter 1
 
Reply Thu 24 May, 2007 08:01 pm
Re: Cargo Cult Science
"That is the idea that we all hope you have learned in studying science in school -- we never say explicitly what this is, but just hope that you catch on by all the examples of scientific investigation."

I sure hope that they do teach that in school; from what I remember, they do. Though I have no idea what schools this guy is talking about.
 
evanman
 
Reply Fri 8 Jun, 2007 10:49 am
I would suspect that the number of ex-members who attempt suicide may be a better indicator of the damage that CoG/TF does to people than the number who actually succeed. I have failed twice to kill myself--not planning on trying ever again!
 
 

 