# Residuals vs other subjects

Discussion in 'Mathematics' started by njom4, Sep 6, 2018.

1. ### njom4 (New commenter)

I'm sure lots of us are involved in paperwork analysing students' performance and justifying results.

I have been asked to compare our students' maths grades to their average grade across all other subjects.

Does anyone know of any analysis of subjects along these lines? I'm sure students generally find maths harder than most other subjects.
Is this a fair comparison to make?

2. ### gainly (Star commenter)

It is not fair. When I'm advising colleagues on data analysis, the two things I tell them to avoid (or not to worry about) are residuals and Progress 8. Firstly, sample sizes can often be too small to make it worthwhile. Secondly, it's a zero-sum game: if every subject makes excellent improvement but one subject does slightly better than all the rest, the others will have negative residuals. Thirdly, you can't compare, e.g., Maths and Art. One is a compulsory subject, the other is an option subject; one is 100% exams, the other isn't. So don't let your leadership draw comparisons that are statistically invalid and paint you unfairly in a negative light.
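The zero-sum point is easy to check for yourself: if a residual is a student's grade in one subject minus their average across all their subjects, then each student's residuals sum to zero by construction, so one subject's positive residuals are necessarily balanced by negative residuals elsewhere. A minimal sketch (all grades invented for illustration):

```python
# Why subject residuals are a zero-sum game: a residual here is a
# student's grade in one subject minus their average grade across
# all of their subjects. (All grades below are invented.)

grades = {
    "Alice": {"Maths": 7, "English": 6, "Art": 8},
    "Bob":   {"Maths": 5, "English": 5, "Art": 4},
}

for student, subjects in grades.items():
    avg = sum(subjects.values()) / len(subjects)
    residuals = {s: g - avg for s, g in subjects.items()}
    # Positive residuals in one subject are exactly balanced by
    # negative residuals elsewhere -- even if every grade improved.
    assert abs(sum(residuals.values())) < 1e-9
    print(student, residuals)
```

The same cancellation happens at department level when residuals are averaged over the same set of students, which is why "all subjects above the school average" is impossible.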


Doing this would simply confirm what most people already suspect to be true. Namely, that any given student will perform to different levels in different subjects. It would provide no insight whatsoever to imply anything beyond that.

It could show that students' performance was better in certain other subjects. But how well they perform in ANY given subject is based on a combination of factors, some of which the teacher has no control over whatsoever.

Just to prove what a misunderstood thing this is, we had one headteacher who told us that all subjects must score above the average for the school.

6. ### pi r squared (Occasional commenter)

On what grounds are you basing this comment? The grade distributions (% 9-4 etc.) for maths nationally are broadly in line with those of most other subjects with a large enough entry profile, so if people "in general" do find maths harder then the papers and grade boundaries are adjusted accordingly so as not to penalise this. There is no reason therefore that a cohort of maths results would automatically be lower than those of other subjects.

Comparing a student's subject grade to their own average grade rarely yields anything of much use, unless there are real problems. It is sometimes handy as an argument when certain children have underperformed - it is hard to argue that it is the maths teacher's fault that Johnny only got a Grade 2 in maths if he also averaged Grade 2 across nine other subjects. If an entire class's results show the maths grade lower than the average in every single case, that may ring alarm bells, although nothing is deducible from data without context. Otherwise, unless the department really is underperforming, all that comparing every student's maths grade against their average will show you is that roughly half did better than their overall average grade and roughly half did worse.
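The comparison described above is simple to compute. A minimal sketch with invented grades (the subject names and numbers are placeholders, not real data):

```python
# Compare each student's maths grade to their average grade
# across their other subjects. (Invented grades.)

cohort = [
    {"Maths": 6, "English": 6, "Science": 4, "History": 5},
    {"Maths": 7, "English": 6, "Science": 7, "History": 8},
    {"Maths": 3, "English": 4, "Science": 3, "History": 4},
]

diffs = []
for student in cohort:
    others = [g for s, g in student.items() if s != "Maths"]
    diff = student["Maths"] - sum(others) / len(others)
    diffs.append(diff)
    print(f"maths {student['Maths']}, others avg "
          f"{sum(others) / len(others):.2f}, diff {diff:+.2f}")

# On typical data roughly half the differences are positive and
# half negative; a uniform sign across a whole class is what
# would warrant a closer look.
above = sum(d > 0 for d in diffs)
below = sum(d < 0 for d in diffs)
print(f"{above} above own average, {below} below")
```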

7. ### njom4 (New commenter)

Never heard of ALIS by Durham University then?

9. ### jcstev (New commenter)

Is this whole thread based on confusing two qualifications?

ALIS looks at A-levels (and does show that students with the same average GCSE score tend to get lower grades in Maths than in some other subjects).

At GCSE, any underlying difference in difficulty (if it exists) is corrected for by grade boundaries.

It is not a fair comparison.
Ignore pronouncements from the naysayers who persist in clinging to the debunked myth that all subjects are equally easy.

Have a look at this document.

http://www.score-education.org/media/3194/relativedifficulty.pdf

If anyone bangs on about grade boundaries being fiddled to compensate, remember that this is done over thousands of candidates. The link above gives some evidence to indicate that this is a gross oversimplification of the situation.

Here is an extract.

11. ### pi r squared (Occasional commenter)

Debunked by whom and where? The article you linked shows that, actually, there is very little difference in "relative difficulty" amongst the main subjects with large cohorts, and the extreme ends of the scale are likely caused by entry bias rather than any tangible difference in difficulty. Do we really believe that GCSE Statistics is that much harder than GCSE Maths, for example? Of course not; the figure is probably skewed by the fact that some schools used to force their most able GCSE Maths students to also do GCSE Stats, when oftentimes they couldn't give two hoots about it. It would be equally ludicrous to suggest that single-qualification Science is not only harder than the double award, but also harder than the three individual Sciences too! Again, entry bias and a huge, incomparable difference between the students entered for single Science and those entered for triple.

Either way, if grade distributions are broadly comparable between (most) subjects nationally, you would anticipate that a sample of the size of most Y11 cohorts will have broadly comparable distributions between (most) subjects within the school. Therefore in most cases, maths results will be broadly in line with students' average grades (win some, lose some); if they are markedly higher or lower, then this is unexpected enough to warrant further investigation.

I don't think it's that hard to read the summary...

Which bit about the SUBSTANTIAL DIFFERENCES being stable over time do you not understand?

Your attempts to downplay the evidence do not constitute a statistical method.

13. ### pi r squared (Occasional commenter)

It wasn't hard, no. In fact, it was so easy, I went on to read the entire article - as should have been reasonably obvious from my post.

Capitals and underlining, in case I missed it. Thank you. The bit I do not understand is the actual data presented in the report that shows that, in fact, there aren't "substantial" differences between most of the main GCSE subjects, unless a fifth to a quarter of a grade (with no provided confidence intervals for comparison) here and there counts as "substantial" in your world. There is a "substantial" difference between the subjects with the highest-perceived difficulty and lowest-perceived difficulty, but it only takes a cursory glance at the names of those subjects to deduce that there is an inherent entry bias in those cases, as I discussed in my last post. Even the report itself offers four differing possible conclusions from the differences in the data, and goes on to offer six different reasons why the data may imply something other than relative difficulties - again, a fact that should be reasonably obvious based on the fact that otherwise, General Studies is apparently the hardest possible A-Level and "IT Short" the hardest possible GCSE!

Finally, the report is ten years old. A-Levels went through a minor reform in 2008 and a major one over the last few years so even if "the evidence" held in 2006, that does not necessarily suggest the same is true now. GCSEs are practically unrecognisable from 2006, especially in maths which I think has had tier reductions, modular removal, and a syllabus reform in that time. I would suggest the more recent and more relevant wealth of information here is useful: https://www.gov.uk/government/publications/inter-subject-comparability-2015-to-2016

But all of this is a distraction and not relevant to the original discussion. "People find maths harder than other subjects", whether it is true or not, does not imply "People do worse in maths GCSE than in other subjects" - and the evidence that students in general do just as well in maths as in other subjects is right there in post #2. All other things being equal, a school's maths results will be broadly in line with the average grade of each student's other subjects.