Another interesting concept to be aware of is correlation versus causation. This is the idea that sometimes we see two events that have happened, and we assume that one caused the other. It's an easy mistake to make. Sometimes the data seems to fit so well, and only after some additional analysis and thinking does it become obvious that an error has been made.
There's an interesting website that shows this concept in a kind of humorous way: "Hilarious Graphs Prove That Correlation Isn't Causation," from fastcodesign.com.
So the divorce rate in Maine is graphed against the per capita consumption of margarine. The data seems to align, but there's probably no connection there. The number of people drowned by falling into a swimming pool also seems to be related to the number of films Nicolas Cage appeared in. This one's really good: US spending on science, space, and technology compared against suicides by hanging, strangulation, and suffocation.
Granted, these don't have anything to do with cyber threat intelligence, but you see the concept very visually there. It should bring us back to the idea that when facts appear to be related, there should be stronger evidence to support that, instead of just some visualization tool seeming to show a connection.
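The margarine example can even be reproduced numerically. Here's a minimal sketch, in plain Python with made-up numbers loosely shaped like the real chart, that computes the Pearson correlation of two series that merely share a downward trend:

```python
# Two unrelated series that both happen to trend downward.
# Numbers are invented for illustration, loosely resembling the
# famous "divorce rate in Maine vs. margarine consumption" chart.
divorce_rate = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.0, 3.9, 3.7]
margarine_lbs = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(divorce_rate, margarine_lbs)
print(f"correlation r = {r:.2f}")  # high (near 1.0), yet no causal link
```

A correlation near 1.0 looks impressive in a chart, but it's exactly the kind of "fact" that needs stronger supporting evidence before you claim a causal link.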
Then we have emotional reasoning, and this one is pretty obvious. I mentioned it a little earlier: a CTI analyst needs to be objective, needs to be logical, and should not be giving in to emotion to reach some conclusion.
Just because you want something to be true shouldn't affect your ability to consider the facts correctly, and if what you want to be true doesn't turn out that way, that shouldn't matter from an emotional point of view. Getting emotionally invested in a conclusion can be a dangerous thing, and it has to be identified and corrected. Intuitive thinking is in a similar vein. You know the sayings: I had a hunch; I want to trust my gut. These are things that can certainly help us and serve us well in life as people.
But it may cause problems in a logical analysis.
There are certainly times when intuitive thinking can allow someone to take a leap of logic and reach a conclusion that appears to be correct, that feels right, where the gut check says it's okay, but that turns out to still be flawed in some way. So it is something to watch out for and to be aware of.
Now let's think about logical errors, starting with the ad hominem argument. This is a fancy way of saying that you, as an analyst, might be trying to find reasons not to agree with a conclusion. Let's say another researcher presents their findings, and they say: we know that this threat actor took this action, and this was the attack that resulted. You might think to yourself that the information is probably correct, but you don't like the conclusion. An ad hominem argument in this case would be to attack the researcher's credibility, to attack the researcher's methodology, maybe to try to cast doubt on whether their conclusion is correct.
So it becomes more about the person than the actual evidence they're presenting. We see this kind of thing happen all the time in politics, for instance, where someone says something and other people don't like the message, so they attack the messenger. It is a common failing of human nature, and we should be able to identify it when it happens, to prevent the person from becoming part of the argument. Similarly, all-or-nothing thinking can also be a human failing,
where we tend, in some cases, to think in black and white. It's either yes or no, zero or one; either this all happened or none of it happened. Any kind of logical analysis, or even training in subjects like discrete mathematics, for instance, shows us that very seldom can we say all of this or none of that, or that something happens all the time versus only some of the time. It can really pay to do some of that kind of study, because it gives you an advantage: speaking clearly, plainly, and logically. Knowing when there is a shade of gray in between the black and the white is the idea here.
Next is the appeal to authority. This is used in all kinds of situations; quite often it's used by social engineers as well. A good example in a social engineering context would be to say: I really need you to help me with this, and your supervisor or your boss also wants you to do this and thinks it's a good idea. This appeal to some authority figure makes the person who is being asked, or pressured, to do something feel more obliged to do it.
In the CTI context, it might turn up as an example where an analyst says: I've got this great information; it came from a reputable site like US-CERT.gov, for instance. We already looked at that site a few times during this course. Someone else might say: Well, I don't think that information's correct; I read somewhere else that a different story happened. The first person might respond: Yeah, but it's US-CERT.gov; it's got to be correct. They are the trusted source of this information. And so they use the stature or status of where the information came from as a way to try to prove its correctness,
and we can see easily how this could go down the wrong path. So it is something to think about. Next is labeling, a mistake humans make when, after some initial analysis or some initial understanding, we try to say that this is such and such, and that is some other thing. It's a natural way that our brains try to organize information, of course. But consider, let's say, a person of interest in an investigation: you might label them as the bad guy, but that hasn't been proven yet.
The person of interest is being investigated; their activities are being investigated in relation to some incident. But until there's conclusive evidence showing that they took part, or are guilty in some way, they should be labeled with some neutral moniker, like "person of interest," instead of saying that's our threat actor, that's our bad guy. This is an easier thing to identify in our way of thinking than some of the others we've talked about. Just be careful that if you assign a label to something, it should be supported by evidence, and not just by some of these other things we saw, like emotion or incorrect conclusions due to flaws in our logic. Much like the appeal to authority, we have the appeal to consensus, where
maybe you've got a situation where there are 10 people working on some analysis, and they go to present the information. The conclusion is drawn into question because someone doesn't like it. They might say: Well, I know there are 10 people that have worked on this, but this doesn't seem to add up; I don't like this; I don't think it's right. This would be the consumer of the intelligence speaking, of course, and the person presenting the intelligence would say: Well, all 10 of us agree that this is correct; we've all studied it; we've all worked very hard; therefore it must be correct. That's not the same as showing the information is correct: the fact that a bunch of people agree on something doesn't make it more correct or more likely to be true. That's a logical error in our thinking.
Then there's the appeal to ignorance: trying to understand how someone might be persuaded to take a certain viewpoint because they just don't know enough about it. Trust me, I'm the expert, right? This is a technique that humans use all the time. Again, in social engineering examples, it can be very useful as one of the tools in the bag of tricks. But in the analysis of cyber threat information, taking someone's word for it because you don't know better, or because you don't have enough experience with the topic, might be a mistake. You could think about the famous Russian proverb: trust, but verify. This means you're not disrespecting the person; you're just going to do additional checking to make sure that what they told you, which you weren't so sure about, is actually true. And then the last item on our list is the selective argument.
This is another phrase for the term cherry picking, where I've got 14 different facts to choose from to prove my point, and I pick just these two because they're the most convincing and the most persuasive. The other 12 I might choose to ignore, because I'm very selectively choosing the things that help me achieve my goal.
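To make the effect concrete, here's a small sketch with hypothetical confidence scores (all values invented for illustration), showing how keeping only the two most convincing of 14 data points flips the apparent conclusion:

```python
# Hypothetical confidence scores from 14 independent indicators
# (invented values): most are weak, but two look very convincing.
scores = [0.2, 0.3, 0.1, 0.4, 0.2, 0.9, 0.3,
          0.2, 0.1, 0.95, 0.3, 0.2, 0.4, 0.1]

full_mean = sum(scores) / len(scores)             # what all the evidence says
cherry_picked = sorted(scores, reverse=True)[:2]  # keep only the 2 "best" facts
picked_mean = sum(cherry_picked) / len(cherry_picked)

print(f"all 14 indicators: {full_mean:.2f}")   # weak support overall
print(f"top 2 only:        {picked_mean:.2f}")  # looks like strong support
```

The full data set says the case is weak; the cherry-picked subset makes it look compelling. That gap is exactly what the selective argument hides.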
So as an analyst, you'd have to watch out for this as well: ignoring evidence just because you don't favor it, or because you're not sure of its value, would be cherry picking. All right,
let's move on to the next section