Social Engineering Awareness: Evaluating Employee Training Effectiveness in Preventing Phishing Attacks
Contents
Part 1
1. Introduction
2. Identification and Description of Research Approaches
3. Critical Evaluation of Research Approaches
Positivist Approach: Strengths, Weaknesses, and Suitability
Interpretive Approach: Strengths, Weaknesses, and Suitability
Pragmatist Approach: Strengths, Weaknesses, and Suitability
4. Justification for Selected Research Strategy
5. Ethical Considerations
6. Conclusion
Part 2
1. Introduction
2. Literature Review Framework
3. Key Themes in Training Effectiveness
4. Critical Analysis of Training Methods
5. Evaluation Metrics and Assessment
6. Gaps and Contradictions
7. Research Design Implications
References
Part 1
1. Introduction
Phishing is a type of social engineering attack and is currently one of the most common threats to computer systems; rather than exploiting technical flaws alone, it takes advantage of the human factor. Phishing attacks aim to pressure workers into disclosing personal or company-related information, or to gain unauthorised entry into an organisation's systems, which can have devastating effects. Organisations spend heavily on training programmes to mitigate such risks and to improve users' awareness of these scams. However, research shows that training programmes vary in effectiveness, hence the need to measure the effect of such programmes on employees' ability to identify and avoid phishing schemes.
This study evaluates how well training employees in phishing response leads to genuine readiness in the workplace. A secondary qualitative analysis design will be used, drawing on existing data sources to examine how workers perceived and experienced phishing training. Secondary qualitative analysis suits this aim because it allows existing material to be re-examined for trends, themes, and findings. Using this method will deepen understanding of how employees feel about the training, which in turn can make cybersecurity training programmes more useful.
2. Identification and Description of Research Approaches
Positivist Approach
Under positivism, knowledge is taken to stem from empirical evidence that is quantifiable and can be tabulated for analysis. Positivism relies on measurement and variable-analysis techniques and mainly uses quantitative research methods. Applied to assessing the effectiveness of employee phishing training, a positivist approach would employ measurable factors, such as the frequency of identified phishing attempts before and after training. Data could be gathered from structured questionnaires or from experiments observing employees' reactions to mock phishing attempts.
Following positivist principles, this study could readily adopt measures of training effectiveness such as the improvement in employees' ability to spot phishing scams. This allows training outcomes to be evaluated with statistical tests, providing empirical evidence of effectiveness. Within the positivist paradigm, data collection methods include pre- and post-training tests, structured questionnaires with closed-ended questions, and experiments that yield numerical data. Research findings can thus be grounded empirically, for example by measuring how much employees learn or how much their response accuracy improves after the training.
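As a minimal sketch of how such a positivist pre/post comparison might be analysed, the following Python example uses invented per-employee detection rates from two rounds of simulated phishing emails and applies a paired t-test with SciPy; the figures and the choice of test are illustrative assumptions, not part of the study design.

```python
# Minimal sketch of a positivist pre/post comparison (illustrative data only).
# Assumes each employee's phishing-detection rate was recorded before and
# after training across the same set of simulated phishing emails.
from scipy import stats

# Hypothetical detection rates (proportion of simulated phishing emails
# correctly identified) for ten employees, before and after training.
pre_training = [0.40, 0.55, 0.30, 0.60, 0.45, 0.50, 0.35, 0.65, 0.40, 0.55]
post_training = [0.70, 0.80, 0.55, 0.85, 0.75, 0.70, 0.60, 0.90, 0.65, 0.80]

# Paired t-test: did detection rates improve for the same employees?
t_stat, p_value = stats.ttest_rel(post_training, pre_training)

mean_gain = sum(b - a for a, b in zip(pre_training, post_training)) / len(pre_training)
print(f"Mean improvement in detection rate: {mean_gain:.2f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

A non-parametric alternative such as the Wilcoxon signed-rank test could equally be used; the point is simply that a positivist design yields numerical data that support this kind of inference.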
Interpretive Approach
In contrast to positivism, interpretivism focuses on people's experiences and perceptions rather than on measurable factors. It is typically paired with qualitative methods, such as interviews and open-ended questionnaires, and with analysis that centres on participants' beliefs. An interpretive approach would therefore be useful in this study of phishing prevention training because it would show how far the training and related guidance improve employees and how well it helps prevent real-life phishing incidents (Alharahsheh and Pius, 2020). This approach would ask employees what effect the training had on them, whether they were able to continue learning on their own, and how confident they are in reporting phishing threats or attacks.
Interpretivism seeks to understand how different people see things. It works well with subjective indicators of training programme performance, such as perceived confidence and readiness. Gathering detailed, descriptive data from respondents makes it possible to learn which parts of the training employees value most. Interpretive data collection methods often include interviews, focus groups, and questionnaires with open-ended questions that let employees give their opinions, revealing how people in the organization understand the training and how ready they feel to stop phishing.
Pragmatist Approach
The pragmatist research paradigm is generally seen as flexible and focused on practical outcomes; it often combines quantitative and qualitative methods to address research questions and problems (Allmark and Machaczek, 2023). The pragmatist approach is best suited to research questions that require both quantitative results and qualitative perceptions, since it is not limited to one method. In this study of phishing prevention training, for example, a pragmatist approach could combine quantitative measures of training outcomes with qualitative feedback on the training process.
Taking a pragmatist view of training effectiveness allows numbers and opinions to be examined together. This helps answer both the simpler question of whether the training works in numerical terms and the more complex question of why it works or does not work from the employees' perspective (Dalkin et al., 2021).
In the pragmatist tradition, the two are usually mixed: surveys before and after training (quantitative) combined with follow-up interviews or focus group discussions (qualitative) are common. Together these provide both kinds of information and a fuller picture of how well the training programme worked.
3. Critical Evaluation of Research Approaches
Positivist Approach: Strengths, Weaknesses, and Suitability
The positivist approach, grounded in accurate quantification, provides a precise model for assessing the effectiveness of phishing prevention training. Its major advantage is that it generates measurable results that are easy to compare across pre- and post-training performance. Statistical analysis gives quantifiable evidence of training efficacy, producing concrete results that stakeholders can understand and use in decision-making.
However, positivist research has limitations, notably its lack of focus on employees' experiences and attitudes (Park et al., 2020). Training results may be reduced to numbers alone, omitting the emotions and attitudes learners develop towards the training and other aspects that affect learning. This lack of depth is a drawback when trying to build a broad picture of how training influences employees' behaviour and their confidence with regard to phishing.
In short, positivism is strong at judging measurable results but may miss the qualitative issues needed to establish how ready employees really are. It should therefore not be used on its own to judge how well anti-phishing training worked, since qualitative information remains valuable (Bonache, 2021).
Interpretive Approach: Strengths, Weaknesses, and Suitability
The interpretivist paradigm is well suited to exploring how employees understand and make sense of phishing prevention training, since it treats participants as active sense-makers of their environment. It captures self-reported impressions and contextual factors such as employees' confidence levels, the difficulties they encountered, and how they relate to the training materials. Viewed in this way, interpretivism can reveal gaps in the training material and methods (Brendel et al., 2021).
The interpretivist approach also has drawbacks. Its findings can be hard to generalize because the data reflect each participant's unique point of view, and because the results are context-dependent they cannot easily be translated directly into organizational policy (Park et al., 2020). The interpretivist approach is appropriate for this research project because it seeks to understand training effectiveness from the participants' perspective; however, the lack of measurable data makes it less useful for judging the overall effectiveness of training at a larger scale.
Pragmatist Approach: Strengths, Weaknesses, and Suitability
Both positivism and interpretivism have been criticised: positivism struggles to account for the variety of human experience, while interpretivism cannot draw on statistical analysis of data. The pragmatic approach gives researchers more freedom in how they study the topic and how they measure the results of training programmes, allowing both quantitative and qualitative data to be used to examine how employees responded. It is helpful when the research question needs both kinds of data because it provides the bigger picture (Kelly and Cordeiro, 2020).
A drawback of the pragmatist method is that combining different kinds of data takes more time during analysis. Using both subjective and quantitative data can also make the results harder to interpret when the two do not fully agree.
For phishing prevention training, the pragmatist method works well because it is adaptable enough both to record the results of the training and to capture how employees felt about it, giving a more complete picture of how well it worked and where it fell short.
4. Justification for Selected Research Strategy
An interpretivist research approach is best for this study because the research questions and topics align closely with the interpretivist framework. How well phishing prevention training works depends on how confident, interested, and involved employees are with it. Since the study examines judgement and perception, which are psychological factors belonging to the employees themselves, this approach provides the depth needed to address these objectives properly.
This strategy fits best with a secondary qualitative analysis, which lets the researcher draw conclusions from qualitative data that has already been collected. This saves time and money while still allowing fine-grained data on employees' opinions about phishing training to be examined. Re-analysing qualitative data can surface new patterns and themes that reveal how the training affected employees' readiness and highlight areas where the programme could be changed. For these reasons, secondary qualitative analysis within an interpretivist paradigm appears to be the best way to meet the study's goals and explore the specifics of training effectiveness.
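As a very rough illustration of what re-examining existing qualitative material for recurring themes can look like, the Python sketch below tags invented interview excerpts against a hypothetical coding frame. Real thematic coding is interpretive and would normally be done manually or with CAQDAS tools such as NVivo (cf. Dalkin et al., 2021); the excerpts, themes, and keywords here are all assumptions made purely for illustration.

```python
# Very rough sketch of theme tagging in a secondary qualitative analysis.
# Real thematic coding is interpretive and usually done manually or with
# CAQDAS tools (e.g. NVivo); the excerpts and keyword lists here are invented.
from collections import Counter

excerpts = [
    "The simulated emails made me more confident about spotting fakes.",
    "I still worry I will click something when I am busy.",
    "Training felt rushed; I would like shorter, more regular sessions.",
    "Reporting suspicious emails is easier now that I know the process.",
]

# Hypothetical coding frame: theme -> indicative keywords.
coding_frame = {
    "confidence": ["confident", "easier", "know the process"],
    "anxiety": ["worry", "afraid", "stress"],
    "training format": ["rushed", "sessions", "shorter", "regular"],
}

theme_counts = Counter()
for text in excerpts:
    lowered = text.lower()
    for theme, keywords in coding_frame.items():
        if any(keyword in lowered for keyword in keywords):
            theme_counts[theme] += 1

for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned in {count} excerpt(s)")
```

A keyword tally of this kind can only flag candidate themes; the interpretive work of reading excerpts in context and refining the coding frame remains with the researcher.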
5. Ethical Considerations
Ethical research on secondary sources requires that the data they contain be protected. Participants' right to anonymity and the principle of informed consent remain important, especially when using archival data. Researchers must show that all data used in the study were collected with the permission of participants who agreed to remain anonymous. Confidentiality and data security are also essential to the study's credibility: the researcher must handle personal information carefully so that no data fall into the wrong hands or are misused (Dalkin et al., 2021).
Reducing psychological harm is another ethical issue, since some participants provided emotional details in the raw data. Such material must be handled with caution because it could affect the wellbeing or reputation of the participants involved.
Finally, avoiding bias in interpretation is essential in secondary qualitative research, which rests entirely on the interpretation and analysis of existing data. The researcher's main responsibility is to ensure that the analysis is not contaminated by personal biases, which could distort the participants' views. By attending to the ethical issues above, the study can maintain high ethical standards and respect the rights and privacy of the participants.
6. Conclusion
A comparison of the research approaches showed that the interpretive paradigm is best suited to finding out how phishing prevention training works. This is because it focuses on understanding subjective experiences, which matches the study's goals and objectives and the desire to learn how employees felt. The chosen method, secondary qualitative analysis, allows a thorough revisit of data that has already been collected, providing important insights without the need for new primary data. In this way the study can still offer a broad view of the impact of training, using interpretivism to identify what works and what needs to change in organizational phishing awareness campaigns.
Part 2
1. Introduction
Phishing attacks remain a serious cybersecurity problem. They exploit human weaknesses to trick people into giving up personal information. Because the human element remains vulnerable, employers are stepping up training to stop phishing from becoming a business problem. The goal of this literature review is to discuss what is known about the methods used in anti-phishing training and how well they work at reducing the number of people who fall for phishing. It aims to present the main themes in the training literature, examine different training approaches critically, and establish what is used to measure training success. Recent research supports the value of training, but further studies are needed on how long behaviour change lasts and how it relates to organizational culture and psychological factors. This review of recent articles therefore highlights current training methods and poses questions for future research, with the aim of making phishing training more effective, longer-lasting, and better evidenced.
2. Literature Review Framework
The literature review drew on articles from peer-reviewed journals and other relevant databases. Only papers published between 2019 and 2024 were considered. The most relevant studies were selected systematically, focusing on phishing, employee training, and the measurement of training success, with preference given to peer-reviewed work. From a larger pool of papers addressing training methods, psychological aspects, the use of technology, and practical problems in business settings, ten were chosen.
The literature is used to derive the main themes of this paper, which are then supported by comparative analysis of the studies. This approach covers both common and research-based ways of teaching people to spot phishing scams. Among the articles, Jampen et al. (2020), Sarker et al. (2024), Marshall et al. (2023), and Afridi (2024) discuss different training approaches and ways of measuring results; Desolda et al. (2021) and Karamagi (2022) address the human factors involved; and Aldawood and Skinner (2019) examine how organizational culture affects training. This organized approach makes it possible to consider the different issues and the corresponding strategies for effectively protecting against phishing through employee training.
3. Key Themes in Training Effectiveness
The literature identifies several themes bearing on the effectiveness of anti-phishing training. Training methodologies vary significantly, with conventional training contrasting sharply with contemporary, interactive approaches such as simulation and gamification (Jampen et al., 2020; Sarker et al., 2024). Research indicates that active, engaged learning is favoured because it enhances learners' comprehension and grasp of contextual knowledge (Marshall et al., 2023). Training that disseminates simulated phishing emails across an organization is considered effective, since it closely mimics real-life scenarios and enables employees to recognize the indicators of phishing (Afridi, 2024).
A second notable theme is the assessment of efficacy. Numerous studies indicate that measurements include the number of clicks on links within simulated phishing emails, responses reflecting awareness levels, and retention scores over the duration of the study (Afridi, 2024). Nadeem et al. (2023) note that the durability of these measurements remains uncertain because the metrics are frequently not standardized.
Age, experience, and emotional reactions to training are significant aspects of the human element in training. Desolda et al. (2021) and Karamagi (2022) demonstrate that vulnerability to phishing is influenced by cognitive biases and emotional sensitivities. Phishing frauds typically appeal to fear or time pressure, which makes training in psychological defences prudent.
Furthermore, organizational culture
is a determinant of training efficacy, as elucidated by Aldawood and Skinner
(2019). An effective cybersecurity organizational culture fosters the
implementation of best practices, hence mitigating the risks associated with
phishing attacks. Creating such a culture necessitates ongoing dialogue,
executive support, and a feedback system grounded in the principle that
knowledge is power, rather than punitive authority.
4. Critical Analysis of Training Methods
A review of the pros and cons of traditional and modern training methods shows that while lectures and workshops teach the basics of phishing, they may not motivate people to change their behaviour (Jampen et al., 2020). More complex methods, such as simulation-based training, are gaining popularity because they are more interactive: they let employees experience realistic events and help reinforce what they have learned (Marshall et al., 2023). Adding competition and personalized rewards motivates learners and improves their ability to spot phishing emails. According to Sarker et al. (2024), gamification should be used more in training because it appears to be more effective than traditional training, particularly with younger employees.
The amount and regularity of training also matter. Sarker et al. (2024) and Afridi (2024) recommend short training sessions delivered at work, as these keep workers alert without placing too much strain on them. Marshall et al. (2023) add that the frequency of feedback after simulations is important, because prompt feedback lets mistakes be corrected and makes phishing attacks easier to spot.
However, simulations and games can be expensive and time-consuming to run, so they may not suit small businesses (Aldawood and Skinner, 2019). Another issue is sustaining engagement, since some studies show that training loses its effect over time (Desolda et al., 2021).
5. Evaluation Metrics and Assessment
There are both short-term and long-term ways to judge whether training made people less likely to fall for phishing scams. Performance metrics include the number of clicks on simulated phishing emails, the number of phishing reports, and quiz scores from post-training tests (Afridi, 2024; Marshall et al., 2023). These metrics let companies gauge how much employees understood immediately after the training.
Other evaluation approaches give tests before and after training to measure learning and retention. Nadeem et al. (2023) recommend staged assessments and follow-up tests a few months after the initial training to check whether employees can still remember what they learned and apply it in real life.
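To make these measures concrete, the following Python sketch computes the kind of metrics described above (click-through rate, reporting rate, post-training quiz scores, and a simple retention gap) from one simulated campaign; the data records and field names are invented assumptions, not results from the cited studies.

```python
# Illustrative calculation of the evaluation metrics described above.
# The campaign results below are invented; field names are assumptions.

# Per-employee results of one simulated phishing campaign.
campaign = [
    {"clicked": True,  "reported": False, "quiz_initial": 60, "quiz_followup": 55},
    {"clicked": False, "reported": True,  "quiz_initial": 85, "quiz_followup": 80},
    {"clicked": False, "reported": True,  "quiz_initial": 90, "quiz_followup": 88},
    {"clicked": True,  "reported": False, "quiz_initial": 50, "quiz_followup": 60},
    {"clicked": False, "reported": False, "quiz_initial": 75, "quiz_followup": 70},
]

n = len(campaign)
click_rate = sum(r["clicked"] for r in campaign) / n          # short-term failure metric
report_rate = sum(r["reported"] for r in campaign) / n        # desired behaviour metric
avg_initial = sum(r["quiz_initial"] for r in campaign) / n    # post-training knowledge
avg_followup = sum(r["quiz_followup"] for r in campaign) / n  # retention months later

print(f"Click-through rate:     {click_rate:.0%}")
print(f"Reporting rate:         {report_rate:.0%}")
print(f"Avg post-training quiz: {avg_initial:.1f}")
print(f"Avg follow-up quiz:     {avg_followup:.1f} "
      f"(retention gap: {avg_initial - avg_followup:+.1f})")
```

Comparing the follow-up quiz against the post-training quiz gives a simple retention indicator of the kind Nadeem et al. (2023) call for, although, as the literature notes, such metrics are not yet standardized across studies.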
Longer-term assessment relies on self-checks and structured reviews of real phishing incidents that occur in the organization over time. Sarker et al. (2024) argue that continuous reporting is a good way to monitor employees' performance and adapt their training to the areas where they need help.
6. Gaps and Contradictions
The literature review reveals a number of gaps and contradictions in current research on the effectiveness of anti-phishing training. Jampen et al. (2020) point out problems with studies that treat demographic factors such as age and experience as independent predictors of training outcomes. Sustaining behaviour change over time has also received too little attention, and research on the effectiveness of repeated training is mixed (Marshall et al., 2023). There is likewise limited evidence on psychological and cultural factors such as anxiety and organizational support. Future research should fill these gaps to give a clearer picture of how best to protect against phishing attacks.
7. Research Design Implications
The findings show the importance of research designs that combine observation of how employees behave with follow-up questions about that behaviour, in order to establish how well training works in the long run (Taherdoost, 2024). The central point is that measurement should capture not only individual skills but also personality and the workplace climate if training is to be effective. Simulation-based methods and game elements can help make training more engaging and useful. Future studies should aim to define a clear set of metrics for anti-phishing training so that the field becomes more consistent and it is easier for researchers to compare results across companies.
References
Afridi, S., 2024. Revolutionizing Phishing Defense: A Comprehensive Overview of Defense Mechanisms and Measurement Strategies. https://www.researchgate.net/profile/Sheeba-Afridi/publication/384356492_Revolutionizing_Phishing_Defense_A_Comprehensive_Overview_of_Defense_Mechanisms_and_Measurement_Strategies/links/66f57efa9e6e82486ff09a6f/Revolutionizing-Phishing-Defense-A-Comprehensive-Overview-of-Defense-Mechanisms-and-Measurement-Strategies.pdf
Aldawood, H. and Skinner, G., 2019. Reviewing cyber security social engineering training and awareness programs—Pitfalls and ongoing issues. Future Internet, 11(3), p.73. https://www.mdpi.com/1999-5903/11/3/73/pdf
Alharahsheh, H.H. and Pius, A., 2020. A review of key paradigms: Positivism vs interpretivism. Global Academic Journal of Humanities and Social Sciences, 2(3), pp.39-43. https://gajrc.com/media/articles/GAJHSS_23_39-43_VMGJbOK.pdf
Allmark, P. and Machaczek, K., 2023. Realism and Pragmatism in a mixed methods study. Journal of Advanced Nursing, 74(6), pp.1301-1309. https://onlinelibrary.wiley.com/doi/abs/10.1111/jan.13523
Bonache, J., 2021. The challenge of using a 'non-positivist' paradigm and getting through the peer-review process. Human Resource Management Journal, 31(1), pp.37-48. https://onlinelibrary.wiley.com/doi/abs/10.1111/1748-8583.12319
Brendel, A.B., Lembcke, T.B., Muntermann, J. and Kolbe, L.M., 2021. Toward replication study types for design science research. Journal of Information Technology, 36(3), pp.198-215. https://doi.org/10.1177/02683962211006429
Campbell, S., Greenwood, M., Prior, S., Shearer, T., Walkem, K., Young, S., Bywaters, D. and Walker, K., 2020. Purposive sampling: complex or simple? Research case examples. Journal of Research in Nursing, 25(8), pp.652-661. https://doi.org/10.1177/1744987120927206
Chaudhary, S., Gkioulos, V. and Katsikas, S., 2022. Developing metrics to assess the effectiveness of cybersecurity awareness program. Journal of Cybersecurity, 8(1), p.tyac006. https://academic.oup.com/cybersecurity/article-pdf/8/1/tyac006/47918864/tyac006.pdf
Dalkin, S., Forster, N., Hodgson, P., Lhussier, M. and Carr, S.M., 2021. Using computer assisted qualitative data analysis software (CAQDAS; NVivo) to assist in the complex process of realist theory generation, refinement and testing. International Journal of Social Research Methodology, 24(1), pp.123-134. https://www.tandfonline.com/doi/abs/10.1080/13645579.2020.1803528
Desolda, G., Ferro, L.S., Marrella, A., Catarci, T. and Costabile, M.F., 2021. Human factors in phishing attacks: a systematic literature review. ACM Computing Surveys (CSUR), 54(8), pp.1-35. https://www.researchgate.net/profile/Roberto-Andrade-7/publication/379714394_Human_and_Cognitive_Factors_involved_in_Phishing_Detection_A_Literature_Review/links/661a862139e7641c0bbe71a9/Human-and-Cognitive-Factors-involved-in-Phishing-Detection-A-Literature-Review.pdf
Jampen, D., Gür, G., Sutter, T. and Tellenbach, B., 2020. Don't click: towards an effective anti-phishing training. A comparative literature review. Human-centric Computing and Information Sciences, 10(1), p.33.
Karamagi, R., 2022. A Review of Factors Affecting the Effectiveness of Phishing. Computer and Information Science, 15(1). https://papers.ssrn.com/sol3/Delivery.cfm?abstractid=4355455
Kelly, L.M. and Cordeiro, M., 2020. Three principles of pragmatism for research on organizational processes. Methodological Innovations, 13(2), p.2059799120937242. https://journals.sagepub.com/doi/full/10.1177/2059799120937242
Marshall, N., Sturman, D. and Auton, J.C., 2023. Exploring the evidence for email phishing training: A scoping review. Computers and Security, p.103695.
Nadeem, M., Zahra, S.W., Abbasi, M.N., Arshad, A., Riaz, S. and Ahmed, W., 2023. Phishing attack, its detections and prevention techniques. International Journal of Wireless Security and Networks, 1(2), pp.13-25.
Park, Y.S., Konge, L. and Artino Jr, A.R., 2020. The positivism paradigm of research. Academic Medicine, 95(5), pp.690-694. https://journals.lww.com/academicmedicine/fulltext/2020/05000/the_positivism_paradigm_of_r%20esearch.16.aspx/%22
Sarker, O., Jayatilaka, A., Haggag, S., Liu, C. and Babar, M.A., 2024. A Multi-vocal Literature Review on challenges and critical success factors of phishing education, training and awareness. Journal of Systems and Software, 208, p.111899.
Taherdoost, H., 2024. A Critical Review on Cybersecurity Awareness Frameworks and Training Models. Procedia Computer Science, 235, pp.1649-1663. https://www.sciencedirect.com/science/article/pii/S1877050924008329/pdf?md5=f4df9eb1d872067c3e9398a37c809fd5&pid=1-s2.0-S1877050924008329-main.pdf