Hey everyone, Ken Underhill here, Master Instructor at Cybrary, and in this video we're going to talk about the grading rubric for questions, so assessment questions as well as quiz questions. And I'm going to cover this from both the instructor and the teaching assistant side, so I'll be talking through that for both scenarios.
So number one, accuracy. We want to make sure that the questions, along with the answers, are actually accurate. So, as an example, if I'm putting down nmap commands as part of my question, I want to make sure those commands are actually
accurate, as opposed to
inaccurate information. One example here: if I'm doing a SYN scan, instead of me doing, like, dash lowercase s lowercase s, or dash capital S capital S, the actual correct syntax would be dash lowercase s capital S (-sS). So it's those minute details that would help a student answer the question appropriately.
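To make that concrete, here's a minimal sketch (in Python, just holding the command strings as data; the target IP address is made up for illustration) of the kind of one-character difference being described:

```python
# Sketch: the nmap SYN-scan flag is case-sensitive, so a one-character
# spelling change produces a different (or invalid) command.
correct = "nmap -sS 192.168.1.1"   # SYN scan: lowercase s, capital S
wrong_1 = "nmap -ss 192.168.1.1"   # not valid nmap syntax: both lowercase
wrong_2 = "nmap -SS 192.168.1.1"   # not valid nmap syntax: both capital

# The question's correct answer should use the real syntax, while the
# distractors differ only in these minute details.
```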
So both instructors as you're building questions and teaching assistants as you're reviewing them,
make sure that the information is accurate
And we've got the grading scale here. So if it's partially accurate,
or if only a small amount of it is accurate, then we've got a graduated grading scale here for you to use.
The ultimate goal here would be to make sure that everything is accurate before the question is actually published out to the site for a student to use.
Content matching. Now, there are
two ways this can be done. One common way would be that the course is already live on the Cybrary site, and in that example, as a teaching assistant, or even as an instructor if you're helping out, you would be reviewing the videos and trying your best to match up the question you're creating to a specific timestamp in the videos.
The other side of that is if the course is in progress but the instructor has created some questions. As a teaching assistant, you might try your best to match up the questions that were created to a specific timestamp.
And in yet another scenario, there is a group of questions built off certain
KSAs, so knowledge, skills, and abilities. As an example, if you've worked on the NICE work role assessment project for us, what you'll notice is that we'll give you a list of KSAs and then you build questions off those. At that point, there are, for the most part, no videos filmed yet, and there is no course created, but you are still creating some questions.
So in that example, here's what you're going to do:
with any courses that we provide you that would cover those KSAs, you'll try your best to relate those back to a timestamp in the video. Now, you can't always do that. There might be multiple videos that contain the same subject, and a student really needs to watch, like, all three or five or however many of the videos. In that example, you would just use the grading scale and go down to whatever is relevant here.
But best case scenario, we want to tie it to a timestamp;
sort of worst case scenario, we want to tie it to a module in the course. That way, if students get a question wrong, they know what module to go back to.
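The best-case/worst-case fallback can be sketched like this (a hypothetical question record; the field names and values are illustrative only, not Cybrary's actual schema):

```python
# Hypothetical question record: best case we point at an exact
# timestamp, worst case we fall back to the module so students
# still know where to go back and review.
question = {
    "text": "Which nmap flag performs a SYN scan?",
    "module": "Module 3: Scanning",
    "video_timestamp": "12:45",  # set to None if the topic spans several videos
}

def review_pointer(q):
    """Prefer the exact timestamp; otherwise send students to the module."""
    if q.get("video_timestamp"):
        return f'{q["module"]} @ {q["video_timestamp"]}'
    return q["module"]

print(review_pointer(question))  # Module 3: Scanning @ 12:45
```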
Quality of the incorrect answer choices. What we want to do here, when we're creating or grading questions, is make sure that the question is challenging enough for students. As an example, if I'm using certain syntax for maybe a programming question, or using a particular tool or something like that, I want to make sure that I just slightly change
the syntax for my incorrect answers, so they use incorrect syntax that's only slightly different.
That way, if a student hasn't really paid attention and really studied,
they'll probably choose the wrong answer there, because they really don't understand what they're supposed to be learning, right? So that's what we want to do: we want to put information in those incorrect answer choices that really makes a student think, is this really correct or not?
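One rough way to sanity-check that your incorrect choices are "near misses" rather than obviously wrong is to compare them to the correct answer (a sketch using Python's standard-library difflib; the 0.7 similarity threshold is made up for illustration, it's not part of the rubric):

```python
import difflib

# Sketch: good distractors look a lot like the right answer, but
# are not identical to it. Threshold is an illustrative assumption.
correct = "nmap -sS 10.0.0.5"
distractors = ["nmap -sT 10.0.0.5", "nmap -sU 10.0.0.5", "nmap -SS 10.0.0.5"]

def is_near_miss(right, wrong, low=0.7):
    """A good distractor is similar to the right answer, but not equal."""
    ratio = difflib.SequenceMatcher(None, right, wrong).ratio()
    return wrong != right and ratio >= low

for d in distractors:
    assert is_near_miss(correct, d)
```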
answers and explanations? You want to make sure you're thoroughly answering the questions and providing examples.
So as an example, instead of
if I do an end map scenario again, you could tell I like in map. But if I do an EMF scenario again, if I want to use the correct syntax and three and then use the incorrect syntax and three questions, excuse me three answers and use the correct syntax and one answer What I want to do is I'm gonna explain why those answers are wrong, right? And
a better example would be different commands an end map.
So I want to make sure that I explain each answer choice, like, okay, well, this one is wrong; why is that wrong?
And give some context behind that: well, this command is actually used for this, and that's why it's wrong in this situation.
So just make sure you're explaining the answers properly and offering references if you need to.
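Continuing the nmap example, one way to make sure every choice, right or wrong, ships with a "why" is to keep the explanation next to each choice (the structure below is a sketch, not any real platform's format; the nmap flag meanings themselves are standard):

```python
# Sketch: every answer choice carries its own explanation, so a
# student who picks wrong learns what that flag actually does.
question = {
    "text": "Which nmap flag performs a SYN scan?",
    "answer": "-sS",
    "choices": {
        "-sS": "Correct: lowercase s, capital S is the SYN (half-open) scan.",
        "-sT": "Wrong: -sT is a full TCP connect scan, not a SYN scan.",
        "-sU": "Wrong: -sU scans UDP ports, not TCP.",
        "-O":  "Wrong: -O enables OS detection; it is not a scan type.",
    },
}

# A quick completeness check: no choice may be left unexplained.
missing = [c for c, why in question["choices"].items() if not why]
assert not missing, f"Choices missing explanations: {missing}"
assert question["answer"] in question["choices"]
```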
Time spent. This one is more of a later-on thing for Cybrary, on our end, to measure, but it's something to keep in mind as you're building questions. Going back to the difficulty of the incorrect answer choices, you just want to make sure that students are spending enough time on the questions,
where they have to actually think through them. So again, it's not something you'll have to grade on the rubric; we just have it in there
as sort of a future case. But it is a good thing to tuck in the back of your head as you go through it. Just ask yourself, is this difficult enough for someone? Did I have to spend any time thinking about this, or was it just, hey, is ethical hacking a good thing, true or false? Right?
That's not really difficult. So we want to make sure the questions and answers are difficult enough to make a student take a few moments to think through them, you know, maybe 15 to 30 seconds, maybe 60 seconds or so, really just to try to make somebody think it through. So ask yourself those questions as you review somebody else's questions, and even as you build your own questions, to make sure that they're difficult enough
that somebody would have to actually think through them,
I'm going to skip over the question quality; again, that's going back to the metrics that we'll measure on our side at Cybrary: how many people are actually getting this wrong or right, et cetera.
Now, with your questions themselves, we want to add in interactivity. You want to make them interactive for students, and it's kind of challenging to do so with just a multiple-choice type of question, but the way we make them more interactive is we can add in pictures, diagrams, graphs,
even case studies, real-world examples. So if you've really experienced whatever you're writing a question about in your own job, then by all means share your story; you know, change the names if you want to, just share those stories. It just makes the question itself more interactive. It seems kind of weird to have that as a suggestion and as part of the grading rubric, but
it makes perfect sense when you think about it:
we're doing on-demand courses, and we want to make them as interactive as possible for a student, so we want to make the best experience possible.
Now, the understanding part right here, this is more so for the teaching assistant side. As you review questions that someone else has built,
you want to ask yourself: after I've gone through these questions and looked at the answer explanations,
do I feel like I've mastered this material, or that I could master this material based off this information? Or is it pretty lacking? Is it just telling me true or false, and I don't get any explanation? Those are the types of questions you'll want to ask yourself as you're looking at the understanding part of it.
And, of course, if you're using any diagrams, pictures, et cetera, that you didn't actually create, be sure to cite those. You'll see there that's basically a yes-or-no type of grading system: they either get all the points because they've cited things, or they didn't cite anything properly and they won't get a pass on that particular portion.
So when you're reviewing these questions, and when you're building them, make sure you think through: okay, do I need to cite this source? Is this a picture I created, or is this something I grabbed online? Even though you might find it in a Google search, it still may be somebody's copyrighted work, and it might be something that you need to cite. I always just play it safe and cite it; it's not a big deal to
put the website in there real quick, you know,
"image courtesy of" whatever, google.com or whatever. So I think that's a very valuable thing to make sure you put in there.
So in this video, we talked through this question rubric. Again, this is used for both instructors and teaching assistants. As you're building questions as an instructor or teaching assistant, you want to make sure you go off this criteria, and as a teaching assistant reviewing questions,
you want to make sure you're grading appropriately and making sure that you actually understand the material based off the questions and explanations provided.