
good mathematical reasons to damn director of studies' new "statistics" please

Discussion in 'Mathematics' started by beedge, Nov 8, 2011.

  1. beedge

    beedge New commenter

    I work at a fairly small school, which is looking to publish their IB results by subject.

    We were sent a summary of results today by the career-driven, newly appointed director of studies: 23 subjects (English A1, A2, Maths Studies etc.) along with a "points average" per subject.

    Incredibly the "averages" are to 2 decimal places. For those that aren't familiar with IB scores, each score has already been rounded to the nearest whole number. This is the first thing I'm going to point out. Possibly with an analogy with money:

    Jonny gets £3.62 a week pocket money. Let's call it 4.
    Timmy gets £2.85 a week. Let's call it 3.
    Eddy gets £6.78 a week. Let's call it 7.
    Fred gets £1.45 a week. Let's call it 1.

    Does this mean the average is (4 + 3 + 7 + 1)/4 = £3.75 a week???

    Apparently it does.
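    To see the size of the distortion, here is a quick check in Python (a sketch using the pocket-money figures above; averages of rounded IB scores behave the same way):

```python
# Averaging already-rounded values vs. averaging the raw values.
# Figures from the pocket-money analogy above.
amounts = [3.62, 2.85, 6.78, 1.45]

true_average = sum(amounts) / len(amounts)
rounded = [round(a) for a in amounts]          # [4, 3, 7, 1]
average_of_rounded = sum(rounded) / len(rounded)

print(f"True average:       {true_average:.3f}")    # 3.675
print(f"Average of rounded: {average_of_rounded:.2f}")  # 3.75
```

Quoting such an average to 2 decimal places implies precision that was destroyed the moment each score was rounded to a whole number.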

    Interestingly the "average" Maths Studies score is 2.00. One student did it!

    What I would like to start quoting is something to do with confidence intervals, something I know nothing about. For all these results, we're often talking about class sizes of 5 or 6.
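    For what it's worth, the standard 95% confidence interval for a mean from a class of 5 is enormous. A rough sketch, using hypothetical scores (not from the post) and the t-distribution with 4 degrees of freedom:

```python
import math
import statistics

# Hypothetical class of five IB scores (illustration only)
scores = [5, 6, 4, 6, 7]
n = len(scores)

mean = statistics.mean(scores)
sd = statistics.stdev(scores)   # sample standard deviation
t = 2.776                       # two-sided 95% t-value, 4 degrees of freedom

half_width = t * sd / math.sqrt(n)
print(f"mean = {mean:.2f}, 95% CI = {mean - half_width:.2f} to {mean + half_width:.2f}")
```

With 5 students the interval spans nearly three whole grade points, which makes a department "average" quoted to 2 decimal places close to meaningless.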

    That's one idea. But I'm sure fellow TESers have got some far better ones.

    Thanks in advance, and please feel free to be as rude as you want. And by the way, why do I lose my line spacing when I post this?
  2. It's amazing how many people in 'can do' management positions have absolutely no clue about statistics, including people for whom results analysis is part of their job spec.

    If you are typing on a Mac/iPhone/iPad then you can use basic HTML to format your message. Use angle brackets around the br tag to get a line break.
  3. Karvol

    Karvol Occasional commenter

    Being rude is not a virtue.
    Back to your post. I think you are concentrating on the wrong thing here. It is standard practice to look at exam results and see how they can be improved, although this analysis should have been done a while ago. Better late than never, I guess.
    Two decimal places, one decimal place, does it really make that much of a difference? The real issue is how the students did. Did you have any testing done on the students to see what sort of results they should be getting? My school does ALIS testing, which may be quite a blunt instrument but does give some idea of what sort of ballpark figure we should be expecting.
    The IB scores are worked out in a particular way. When the Grade Award meeting takes place, grade boundaries are decided for the 7 and for the 4. The rest of the grades are then spread evenly around them. To say that the IB score is a rounded average is misleading. If you achieve a certain mark in an exam, you are within a certain boundary. A 6.95 is no different from a 6.01 when it comes to the IB grade awarded - it will be a 6.
    What would be more interesting to look at is the individual score per paper and also the internal assessment moderation scores. You also need to compare the predicted grades and the actual grades. Do these tie in with what the students have achieved in the IB years? If the grades are lower than what was expected, you need to look at why this is happening. Grade inflation, incorrect understanding of criteria, students relaxing once their US university acceptances are in: all of these are relevant.
    To be blunt, a 2 in maths studies is pretty poor and could become a failing condition. How were your results in SL and HL?
    Also, how long has your school been an IB school?
    Very few people have a proper understanding of statistics. And this is reinforced by the ridiculous way the media use such figures. A recent programme on BBC1 (?) about the Welfare State used a survey of 1009 people as being representative of the UK population. Hmmm, in other words they asked roughly 1 in every 60,000 people their thoughts. Very representative...
    The very fact that 8 out of 10 cats (cat-owners) said, "Their cats preferred..." really? Did they ask the cats? How precisely?
    And don't get me started on percentages. Pet peeve: the next time a footballer says, "I gave it 110%." Really? Then you didn't actually give 100% before, because although we can have percentages above 100% (interest calculations, etc.) it is impossible for a body to give 110%. Lack of understanding, lack of proper use... Which came first: the media no longer asking the proper questions, or the syllabus not allowing us proper time to question such statements?
  5. Nazard

    Nazard New commenter

    I can't comment on the particular survey you are referring to, but opinion polls (as carried out by Gallup, YouGov, etc) generally have a sample size of about 1000. They claim to be +/- 3%.

    I don't know how to justify that level of accuracy, though, and it appears to be absolutely crucial that a sensible sample is chosen!
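    The usual justification is the worst-case margin of error for an estimated proportion, which at 95% confidence works out to roughly 1/sqrt(n). A quick sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 10000):
    print(f"n = {n:>5}: +/- {margin_of_error(n):.1%}")
```

At n = 1000 this comes out at about ±3.1%, which is where the pollsters' figure comes from. Note it assumes a genuinely random sample, which is exactly the caveat about how the sample is chosen.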

    A short story by Isaac Asimov called "Franchise" takes this sort of sampling to a logical (and rather wonderful) conclusion.
    As a Mathematics student, I find such a level of accuracy highly debatable. In a GCSE class I would ask them whether they thought such a survey was accurate. I know they state a +/- 3% rule, but as few people actually know what that means, can we take such a number for granted? For instance, we rarely know how the survey was carried out.
    e.g. Have you ever stopped to answer a questionnaire in a busy town/city centre?
  7. pencho

    pencho New commenter

    Brambo, I think you would be surprised how accurate statistics can be, given a decent representative sample. You might think this is a low figure, but it is quite accurate. The BARB survey uses approximately 5,000 homes to generate viewing figures for 26 million. It might be quite ironic that you say "Very few people have a proper understanding of statistics".
  8. Nazard

    Nazard New commenter

    I agree entirely that the way this is carried out is absolutely critical.
    If you carry out a convenience sample you will get dodgy results, if you post your questionnaire on Facebook you will get dodgy results and if you carry out your survey by shouting your questions at people as they get on an escalator at Euston station you will get dodgy results, but none of this has anything to do with the sample size.

    A sample size of 1000 has an error of +/- 3% (with 95% confidence). So 1009 people _can_ be representative of the population of the UK. If they aren't, then it isn't the fault of the size of the sample, but either the way the questions were written, the way the questions were asked, the exact individuals who were chosen to be part of the 1009, or the way the data was interpreted after it was collected.
    A sample of 1000-1500 is about right for the UK electorate. I can't remember the details, but it is something to do with the way the confidence increases as the sample size grows (the margin of error shrinks with the square root of the sample size, so quadrupling the sample only halves the error).
    However, it should be noted that the polling companies use a stratified sample with lots of categories to make sure it is representative, and can still get it spectacularly wrong. For example, in the 1992 general election the pollsters included a fudge factor to counter respondents being unwilling to admit to voting Tory (as had been found to be the case in '87), but their stratification failed to take account of the vast socio-economic changes between 1981 and 1992 (their stratification was based on the '81 census).
  10. They claim a +/- 3% confidence interval based upon the sample size ALONG WITH A 95% CONFIDENCE LEVEL.
    In other words, they are 95% confident that their answer is within a TOTAL range of 6 percentage points, e.g. if they show a response of 70%, the +/- 3% gives them 95% confidence that the population would respond between 67% and 73%.
    Now, actually a survey of 1009 is just under what most calculations require to give exactly +/- 3% at a 95% confidence level.
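    That claim can be checked directly by solving the margin-of-error formula for n (a sketch; worst case p = 0.5, 95% confidence):

```python
import math

def required_sample(moe, p=0.5, z=1.96):
    """Smallest simple random sample giving the stated margin of
    error for a proportion at 95% confidence."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

print(required_sample(0.03))   # 1068, so 1009 is indeed just under
```

A sample of 1009 gives about ±3.1% rather than a true ±3%, which is presumably why pollsters round the quoted figure.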
    More important, though, is when and where a survey is carried out... and whilst the survey size does work out for a general election, such polls can still be unreliable. In fact, extremely unreliable...
    My point is that they claim a +/- 3% rate without mentioning the 95% confidence level, or where such a survey was carried out. And whilst these people are professionals, I'm suggesting that the market researchers they have on the street are often not.
    I suppose my general point is that I see it as a misuse of statistics because most of the population have little understanding of confidence intervals etc.
