Discussion in 'Education news' started by MacGuyver, Aug 4, 2020.
IB parents have already started legal proceedings against the IB organisation.
It's typical and infuriating. Ask teachers but ignore them.
It seems in all cases (IB, SQA, A levels and GCSEs) overall results are up on previous years. If there are apparently so many people who have got lower grades than they deserved, there must be even more who got better than they deserved. I don't suppose they will be complaining.
...and as we go further down the rabbit hole
A snippet for those who are taxed for time
Does it mean that we don’t have to provide the teacher assessments/rank order information to students this year?
The exam script exemption does not exempt you from providing the teacher assessments and/or rank orders to students. However, if you receive a request for this information before the official results are announced, you will have a longer time to respond to this request.
The issue comes down to every student expecting to hit their best possible predicted grade, whilst in reality we know that a significant proportion would, for whatever reason, have underperformed in the final exams.
Where this year is difficult is that the teachers are making the judgments which lead to these grades, meaning that the students will blame teachers for their grades rather than their own underperformance over the last couple of years.
For example, one of my students was desperate to get a UCAS predicted grade of a B. He had not achieved above a C grade in any assessment in Yr12 or the beginning of Yr13, and there was no further improvement during the rest of the course up to the beginning of lockdown. I am sure he is expecting us to have given him a B grade and will want us to appeal whatever grade he does get in the end (we did not put him down for a B). In the past he may have been able to pull it out in the final exam, but as teachers we cannot predict who will make such improvements (just as we can't predict the students who end up tanking). As this student has not taken any responsibility for his performance up to now, I can't see him suddenly accepting lower-than-desired results without blaming everyone apart from himself.
This looks as if it's turning into a dreadful mess with grades that nobody believes and all sorts of working relationships soured.
I'm sure, just by the way the results are calculated, that some got better than they deserved. That doesn't follow from the results going up. That could simply be the removal of the element of chance. We all know students who did pretty well all year and then, for whatever reason, bombed on the final exam. I'm not sure the exam always gives students the result they "deserve" except in the narrow sense that the result reflects how well they did in the exam.
I think that part of the issue may also be that where students are borderline between two grades, teachers this year have estimated up the way, rather than down. Essentially, they've given a larger proportion of them the benefit of the doubt, rather than trying to second guess who might have bombed the exam. In a normal year, that wouldn't matter so much, but this year, of course, it skews the estimates upwards to a huge extent.
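Just to put some numbers on the "benefit of the doubt" effect: a quick Python simulation (the marks, grade boundaries and the 3-mark benefit-of-the-doubt window are all invented for illustration) shows how bumping borderline students up one grade shifts the whole distribution, while symmetric exam-day noise on its own doesn't:

```python
import random

random.seed(1)

# Hypothetical cohort: each student has a "true" mark out of 100.
marks = [random.gauss(60, 12) for _ in range(100_000)]

def grade(mark):
    """Map a mark to a grade band 0..4; boundaries (30/45/60/75) are invented."""
    return min(4, max(0, int(mark // 15) - 1))

# Normal year: the exam adds symmetric noise, so borderline students fall either way.
exam = [grade(m + random.gauss(0, 5)) for m in marks]

def estimate(mark):
    """Teacher estimate: anyone within 3 marks below a boundary is bumped up."""
    g = grade(mark)
    if grade(mark + 3) > g:
        return g + 1
    return g

est = [estimate(m) for m in marks]

print("mean exam grade    :", sum(exam) / len(exam))
print("mean teacher grade :", sum(est) / len(est))
```

The exam-noise mean stays roughly where the true marks put it, because students fall down across a boundary about as often as they fall up; the benefit-of-the-doubt estimates only ever move students up, so the whole distribution inflates.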
We had 5 this year in our cohort who were capable of hitting the top band for a particular level, and we had evidence to suggest they could. So we estimated accordingly. Would they all have got that grade if they sat the exam? Probably not - it's likely at least one of them would have had a bad day. Should we have tried to decide which 3 or 4 of them would have got it? Or should we have done what we did, and given them all the same estimate without attempting to second-guess what their exam performance could have been?
But presumably you had to put them in rank order, even if you gave them all the same estimated grade. It seems the rank is the important thing, as the estimated grades are largely being ignored, so effectively you have decided.
Genuine question (from someone who only works on the assessment side these days).
You were asked to rank order candidates, I know. Were you furthermore asked for assessment of grading based on current evidence (as opposed to a forecast grade that you would do or would have already done prior to terminal assessment)?
I'm asking because some teachers seem to think that they're being asked to predict rather than "simply" assess the evidence they already have.
We were asked for two things - a predicted grade and the rank within that grade e.g. Johnny is a grade 6 rank 5 (fifth highest grade 6).
The predicted grade should be based on the student's performance over the course and be a realistic view of what that student would achieve.
"Predicted", then, would be pretty much where they are at the time of being asked, wouldn't it? (Only a few more weeks for them to make progress?) I suppose if a candidate had just had a prolonged absence and was now back/well, you could expect more to come. Otherwise, they'd be in the same grade, you'd expect.
Ranking in the grade - I'd imagine you'd only put someone as bottom rank if they were really borderline and you wouldn't be distraught if the AB verdict put them down to the next grade?
I must admit I was shocked when I saw that 25% of all grades were changed.
Then I looked at the data: left unmoderated, results would have jumped by a crazy percentage, something like 15% if I recall correctly. So clearly grades did need to be altered to be more realistic. But not by a postcode lottery; penalising good students for where they live was never going to end well.
Scotland, however, is in a bit of a pickle here - it doesn't collect as much data centrally as England does and therefore doesn't have much data to put into its models. And applying population-sized statistical models to a cohort of 30 students is mathematical nonsense anyway.
This is why I'm hoping, against all odds I admit, that England will not be in such a mess. We collect much more data centrally giving us more data to put into the models. I expect to be proven thoroughly wrong this Thursday, I'm young enough to hope that all this data I've entered will have a point.
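On the "mathematical nonsense" point, here's a rough Python sketch of why small cohorts break this kind of moderation (the 25% national top-grade rate is made up). Even if every centre were statistically identical, the share of a 30-student cohort hitting the top grade swings far more by pure chance than a population-sized sample does, so pinning a small cohort to last year's distribution mostly penalises noise:

```python
import random
import statistics

random.seed(0)

P_TOP = 0.25  # hypothetical national rate of the top grade

def top_grade_rate(n):
    """Share of an n-student cohort reaching the top grade, by chance alone."""
    return sum(random.random() < P_TOP for _ in range(n)) / n

small = [top_grade_rate(30) for _ in range(2_000)]    # school-sized cohorts
large = [top_grade_rate(3_000) for _ in range(2_000)]  # population-sized samples

print("n=30   spread (stdev):", round(statistics.stdev(small), 3))
print("n=3000 spread (stdev):", round(statistics.stdev(large), 3))
```

The n=30 spread comes out roughly ten times the n=3000 spread (the usual 1/sqrt(n) effect), which is exactly the year-to-year fluctuation that a rigid historical-distribution cap reads as "this cohort is too generous".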
Nicola Sturgeon has apologised and said the government will announce a plan to fix things tomorrow. It seems a bit of a mess. I expect lots of grades will go up but they can't really reduce any grades, so there will be a huge overall increase in results from last year.
Having GCSE data for the A level students will help in working out the cohort strength and therefore if the grade distribution for that cohort should be better or worse than previous years (with KS2 data having a similar use for GCSEs).
There will still be complaints about the results, and the media will not have to look far to find students who have lost out in the process.
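For what it's worth, a toy sketch of how prior-attainment data could feed a cohort-strength adjustment like the one described above. To be clear, the numbers and the crude linear shift are my own invention for illustration, not the actual moderation model, which was considerably more involved:

```python
# Invented example: a centre's historical A-level grade shares, adjusted
# because this year's cohort arrived with stronger GCSE results.
historical_share = {"A": 0.20, "B": 0.35, "C": 0.30, "D": 0.15}

past_gcse_mean = 6.1   # mean GCSE points of previous cohorts (hypothetical)
this_gcse_mean = 6.4   # this year's cohort looks a bit stronger on paper

# Shift some probability mass up one grade in proportion to the
# difference in prior attainment (a crude linear adjustment, capped).
shift = max(0.0, min(0.2, 0.3 * (this_gcse_mean - past_gcse_mean)))

grades = ["A", "B", "C", "D"]
adjusted = dict(historical_share)
for hi, lo in zip(grades, grades[1:]):  # move mass from each grade to the one above
    moved = historical_share[lo] * shift
    adjusted[hi] += moved
    adjusted[lo] -= moved

print({g: round(p, 3) for g, p in adjusted.items()})
```

The point is only that prior attainment gives the model a principled reason to let one cohort's distribution differ from the centre's history, rather than assuming every year group is identical.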
Not being completely up on the Scottish system, I can't comment on whether the way the grades were generated was fair, but I think the method used in England was about as good as could have been proposed. The generosity of teachers was always going to be an issue - I am interested in how England (and ultimately Scotland as well) reacts to the inevitable outcry.
I forgot to add that in England, the availability of an exam in the autumn will help with any particularly egregious errors - something which is not possible in Scotland.
I thought most secondary teachers regarded KS2 results with huge suspicion.
I wonder how many will do the exam in the autumn. The results are unlikely to come out before Christmas, so they won't be much good to anyone wanting to go to university this year or affect what they can do in 6th form. Also, I doubt many year 11s or year 13s will have done much work since schools closed in March and they were told exams were cancelled, so it seems unlikely they could do well enough in an exam to improve their grade.
KS2 data has been considered sceptically in the past but it gives a picture of the ability of a cohort. For any secondary school which has a stable intake from similar feeder schools, any issues with the data are going to be consistent from year to year.
In subjects like Music (probably DT?) that extra term can make a big difference. There is a lot of coursework to get done which takes up much lesson time to complete - I had just gathered it all in when we went to lock-down. From the end of March to the exam in June students often make a lot of progress in their practice exam papers because that is the sole focus of the 6 or 7 weeks.
'Ranking' meant exactly that - we were asked to give each student a number relative to the student above and below. You could not rank any students as equal, nor give any indication as to how 'safe' the prediction was. Last year, two thirds of my class got the same grade (only a small class). I had accurately predicted this but could not have told you what order their marks would fall in. I didn't think any of them were particularly borderline, having marked their coursework - but I would still have been required to make one of them (probably wrongly) the bottom rank.
As this is a one-off, surely the best thing would be to give the students the grades they were predicted. Apparently this would give a 10% uplift in grades.
But so what if it does? That’s better than the fiasco that’s unfolding.