Exploring user behavioral data for adaptive cybersecurity

This paper describes an exploratory investigation into the feasibility of predictive analytics of user behavioral data as a possible aid in developing effective user models for adaptive cybersecurity. Partial least squares structural equation modeling is applied to the domain of cybersecurity by collecting data on users’ attitude towards digital security, and analyzing how that influences their adoption and usage of technological security controls. Bayesian-network modeling is then applied to integrate the behavioral variables with simulated sensory data and/or logs from a web browsing session and other empirical data gathered to support personalized adaptive cybersecurity decision-making. Results from the empirical study show that predictive analytics is feasible in the context of behavioral cybersecurity, and can aid in the generation of useful heuristics for the design and development of adaptive cybersecurity mechanisms. Predictive analytics can also aid in encoding digital security behavioral knowledge that can support the adaptation and/or automation of operations in the domain of cybersecurity. The experimental results demonstrate the effectiveness of the techniques applied to extract input data for the Bayesian-based models for personalized adaptive cybersecurity assistance.


Introduction
The need to understand users within any human-computer system has long been identified as a critical design principle by HCI researchers and professionals. In recent years, there has been an increasing interest in the role users play in maintaining security within the digital economy (Canongia and Mandarino Jr 2013; Church 2008; Tsai et al. 2016).
The adoption and appropriate use of security mechanisms by home computer users (hereinafter referred to as users or HCUs) in particular have become a central concern for the usable security research community. Howe et al. (2012) described HCUs as people who have not received any formal training to use computers but use them to support various tasks in non-work environments. Despite advances in cybersecurity technological solutions, most HCUs are still unable to effectively access them for the protection of their digital assets. As HCUs are increasingly targeted in security breaches (Crossler and Bélanger 2014), there is a consensus among both cybersecurity researchers and key industry players about the urgent need to understand their cybersecurity behaviors and how best to enhance them.
To the same degree that efforts are being geared towards the security of cyberspace, the need exists to make cybersecurity mechanisms equally accessible to the average user. People need to improve their security practices regularly, which means they must be willing to learn and adopt the best security policies, and the mechanisms that enforce those policies. The National Institute of Standards and Technology (NIST) suggests that the best way of involving everybody is to create incentives that can motivate everyone within the cyber economy (Schwartz 2011). Several usability studies on different types of security controls (e.g. firewalls, anti-virus) have illustrated how usability issues prevent end users from effectively leveraging them for protection against security attacks (Cheung et al. 2001; Wong 2008). Furnell and Clarke (2012) touched on anti-virus software usability and pointed out that users are faced with more complex interfaces owing to the new trend of integrated internet security suites. Consequently, the burden of understanding the full set of security functionality provided through the surrounding options in web browsers has increased.
A reasonable assumption is that improving the usability of cybersecurity mechanisms can serve as a major incentive for users to adopt better security controls and behavior online. However, the adoption of security systems remains problematic, partially as a result of security researchers focusing less on the usability of systems within their social context (Church 2008). It is becoming increasingly difficult to ignore the impact of individual differences and other socio-cultural variables when applying usable security design heuristics. Adaptive and/or personalized user interaction design has been proposed as a possible way of addressing usability and acceptability issues related to different user domains and contexts (Akiki et al. 2015; Mezhoudi et al. 2015; Bunt et al. 2004; Jason et al. 2010). For instance, Liu et al. (2016) operationalized a Personalized Privacy Assistant (PPA) and found that recommending more suitable permissions to mobile application users improved acceptability and usability. Here, the concept of Personalized Adaptive Cybersecurity (PAC) implies that security and/or privacy functions for online applications would have to adapt to both contextual changes and individual user preferences or needs. Thus individual differences can influence not just the perceived usability, but also the perceived risk, attitudes and acceptability of how a specified cybersecurity mechanism is designed (Dillon 2001; Holden and Rada 2011). There is a need to further understand the factors that affect users' perceived benefits of security controls, as well as the dimensions that wholly describe their attitude towards cybersecurity, to better support the provision of PAC.
Acquiring knowledge about users and their perceptions is, therefore, a critical step in the process of improving the usability of cybersecurity mechanisms. Previous studies have identified useful insights into users' security behavior by focusing on one or two influential factors from existing cognitive theories such as the Theory of Reasoned Action (TRA) (Lu et al. 2005), the Theory of Planned Behavior (TPB) (Ng and Rahim 2005), the Diffusion of Innovation Theory (DIT) (Conklin 2006) and the Protection Motivation Theory (PMT) (LaRose et al. 2005; Milne et al. 2009). Our research model explores a wider variety of these dimensions by integrating TAM with PMT to explain and predict individuals' security behaviors. The model is further augmented by introducing attitude to personal data as one of the key determinants of intention to practice cybersecurity. As part of understanding users' attitudes towards cybersecurity, this research focused on the inherent vulnerabilities of web browsers and how users interact with their built-in cybersecurity features (e.g. malware prevention, content filtering, private browsing, password manager, etc.) for security online.
The study combines and applies behavioral science and machine learning (ML) techniques to better support user modeling in personalized adaptive cybersecurity applications. An integrated model of cybersecurity adoption has been developed and tested to determine the influential factors that impact people's attitudes to web browser security. Partial Least Squares Structural Equation Modeling (PLS-SEM) is applied to analyze empirical data collected using an online questionnaire-based survey. The empirical data and findings from the PLS-SEM model then serve as input for building the Bayesian-Network (BN) models for personalized adaptive cybersecurity (PAC). Thus the empirical experimentation with PLS-SEM assisted in determining which variables should be considered to support the personalization capability of the BN. The resulting components and structure of the Bayesian-network-based model illustrate how cybersecurity assistance can be provided intelligently.
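As a concrete illustration of the second stage of this pipeline, the sketch below shows how a Bayesian update over a single survey-derived behavioral variable could work. It is a minimal, hypothetical example: the node names ("attitude", an unsafe-click event) and all probabilities are invented for illustration and are not the structure or values used in the study's BN models.

```python
# Minimal Bayesian-network-style update: one survey-derived behavioral
# variable ("attitude") influencing one observed browsing event.
# All names and probabilities are hypothetical illustrations.

P_attitude = {"protective": 0.6, "lax": 0.4}      # prior, e.g. from survey data
P_unsafe_click = {"protective": 0.1, "lax": 0.5}  # P(unsafe click | attitude)

def posterior_attitude(unsafe_click: bool) -> dict:
    """Update the belief about a user's attitude after one browsing event."""
    likelihood = {
        a: P_unsafe_click[a] if unsafe_click else 1.0 - P_unsafe_click[a]
        for a in P_attitude
    }
    unnorm = {a: likelihood[a] * P_attitude[a] for a in P_attitude}
    z = sum(unnorm.values())          # normalizing constant (Bayes' rule)
    return {a: p / z for a, p in unnorm.items()}

# Observing an unsafe click shifts belief towards the "lax" attitude,
# which an adaptive mechanism could use to escalate assistance.
belief = posterior_attitude(unsafe_click=True)
```

In a full BN, each such conditional probability table would be informed by the PLS-SEM path estimates and the simulated browsing logs described above; the update mechanics, however, are exactly this repeated application of Bayes' rule.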

Issues related to the adoption of cybersecurity controls
Cyberspace, as an interconnection of web technologies, makes the sharing of digital information, products, and services available to a broader range of participants. Cybersecurity is concerned with protection from unauthorized access to, and usage of, the devices, applications, and data that connect to these web technologies through the internet. As more and more things are being attached to networks and connected to the internet (the era of the Internet of Things (IoT)), it is becoming nearly impossible to separate security on stand-alone computers from cybersecurity. Canongia and Mandarino Jr (2013) defined cybersecurity as: The art of ensuring the existence and continuity of the information society of a nation, guaranteeing and protecting, in Cyberspace, its information, assets and critical infrastructure. This definition broadly views cybersecurity from a national perspective with no reference to personal safety or privacy within cyberspace. Cavelty (2014) made a distinction between national security and human security, indicating that the former entails actions that affect social functions relying on IT and other critical infrastructures, while the latter involves actions affecting acquired values like anonymity, privacy, and other personal freedoms. Craigen et al. (2014) drew attention to the fact that most definitions of cybersecurity miss the interdisciplinary nature of the field and tend to focus on the technical perspective. They posited the following definition after reviewing the literature and engaging with a multidisciplinary group of cybersecurity practitioners from varying backgrounds: Cybersecurity is the organization and collection of resources, processes, and structures used to protect cyberspace and cyberspace-enabled systems from occurrences that misalign de jure from de facto property rights.
Accordingly, their proposed definition is aimed at capturing the multiple dimensions of cybersecurity to promote a more interdisciplinary approach to addressing emerging complex security challenges in cyberspace. More often than not, cybersecurity strategies tend to be targeted at protecting national and/or organisational security. Adopting a top-down approach that focuses on the higher levels, especially national security and large corporations, has led to individuals' security needs being undermined. Therefore, there is a need to systematically balance national and individual security. This research seeks to provide a holistic understanding of the effect of individuals' cybersecurity perceptions, attitudes and/or behaviors. This holistic view of human online security is not just relevant in determining appropriate policies but also in improving the usability of cybersecurity controls and increasing their acceptability among non-expert users. Ross and Johnson (2010) classified security controls into three categories of management, operational and technical countermeasures that are applied to protect the confidentiality, integrity, and availability (CIA) of systems and the information they handle. Operational and managerial controls focus on security risks and incidents that are monitored and managed by people (e.g. usage policies, business continuity planning, training, etc.). Technical controls are mechanisms that use technology-based set-ups such as firewalls, anti-virus software, user authentication, encryption technologies, Intrusion Detection Systems (IDS), etc.
as system protection measures. As more and more people are able to gather, process, transfer or store sensitive commercial and personal data over the internet, cybersecurity threats are also evolving rapidly. Therefore, achieving the aforementioned security goals of CIA is as vital to the data protection needs of domestic internet users as it is to corporate and government networks. People generally want to be assured that nobody will tamper with their information without their consent. People also want their data to be readily available and accessible at any point in time. Unfortunately, any form of data, be it corporate or personal, that is exposed to the internet is at risk of being compromised. Consequently, internet users need to be able to easily adopt and correctly use any available cybersecurity mechanisms to minimize such risks.
However, most non-security-expert users find it quite challenging to understand and correctly configure the available security mechanisms to avoid system breaches and cyber-attacks. The usability of security mechanisms has long been identified by computer security researchers as critical to ensuring the protection of information systems (Whitten and Tygar 1999; Zurko and Simon 1996). This is because humans are a key component of any security system, yet they are largely considered to be the weakest link in security. Mitnick and Simon (2011) pointed out that no matter how technically robust a security technology happens to be, an attacker can break through the defences by exploiting the human element. Thus, a cybersecurity mechanism can lose its value if users are unwilling to adopt it or cannot use it due to poor usability, thereby having a negative impact on the usability of internet-based applications (Cambazoglu and Thota 2013).
There has been little success with incorporating usability guidelines and standards into security-related interfaces. Security-related interfaces in the context of this research refer to the programs that allow users to manipulate security mechanisms on a system, control the effects of those manipulations, and see how the security status is indicated. Although many pieces of consumer software are now successfully designed to be usable, security applications are still found to be lacking in user-friendliness. A number of usable security studies (e.g. Hof 2015; Kainda et al. 2010) have made a distinction between the usability of security software and non-security software, arguing that usable security design strategies should essentially consider and address the inherent properties that make the security domain particularly challenging. Accordingly, different interface design techniques are required for effective security-related interfaces, and a special case exists when adopting the prevailing general usability standards for security mechanisms.
Although usability evaluation is critical in determining the proper implementation of security tools, it cannot fully explain and predict actual adoption and usage. Usability, which is part of overall system acceptability, focuses on the extent to which the system can be used, while acceptability is concerned with how well the system supports the needs and requirements of all stakeholders (Nielsen 1994; Bordo 2010). Thus, an acceptance model is required to explain and predict the acceptability of cybersecurity designs and implementations. Studies exploring acceptability in the field tend to focus on the factors influencing the acceptance of security policies and solutions within organizational contexts. Topa and Karyda (2015) recently reviewed the literature on employee security behavior and classified the factors influencing it as individual, organizational and technical. Accordingly, organizations aiming to improve their security policy compliance are recommended to adopt a holistic approach that addresses issues related to all three factors. However, HCUs may not be able to access such support to improve their information security behavior.
Recently, researchers have shown an increased interest in understanding users' security behavior in both corporate and non-corporate settings. Coventry et al. (2014) described several possible scenarios in which decision making within the context of cybersecurity differs from other behavioral contexts. Omidosu and Ophoff (2016) highlighted the need for more studies into the security behaviors of non-corporate computer users, based on their systematic review of the extant literature on information security behavior in both organization and home contexts. Accordingly, a considerable knowledge gap exists where the security behavior of individual cyber citizens operating within non-corporate environments is concerned. The findings reported in this paper fill part of this gap by incorporating empirical evidence of actual cybersecurity-related attitudes and behaviors into the development of user models for personalized adaptive cybersecurity. It is critical to assess and ensure both the usefulness and user-friendliness of security tools developed for inexpert security users. In non-corporate settings, technical factors influencing security behavior include the quality, performance, and usability of the technological controls. Consequently, it is becoming increasingly important to focus on making the use of computer security tools effortless. The user model proposed and evaluated in this study for personalized adaptive cybersecurity is geared towards this goal of effortlessness.

Theoretical framework for propositions
Factors affecting the acceptance of various computer technologies have been a central research focus underlying the implementation of computer systems. Davis et al. (1989) determined that resistance to computer technologies aimed at increasing performance can be assessed and addressed with predictive behavioral models. This has led to the development of various models aimed at verifying the effect of identified factors on the acceptance of different kinds of technologies. These factors can be broadly categorized as individual, contextual and system characteristics. Two prominent models designed to predict specific security behavior are the Technology Acceptance Model (TAM) and the Protection Motivation Theory (PMT) (Howe et al. 2012). Our research model, as shown in Fig. 1, integrates components from both these models and includes other factors found to be possible determinants, such as value for personalization and attitude to personal data. The model consists of three main components (External Variables, User Perceptions/Attitudes, and Cybersecurity Behaviors), and explores how the identified external variables may influence perceived ease of use (PEOU), perceived usefulness (PU), perceived risk (PR), value for personalization (VFP), and attitude to personal data (APD); and how these can then predict an individual's cybersecurity intentions (BI) and actual cybersecurity behavior (ACB).

Fig. 1 Predictive model for user cybersecurity behavioral intentions. Note: Components in green and blue boxes represent TAM and PMT components respectively
The TAM introduced by Davis et al. (1989) has since been adopted in studying and predicting user acceptance of various forms of technology (e.g. Lee 2009; Mun and Hwang 2003; Abdullah et al. 2016). This has led to a substantial amount of theoretical and empirical support being accumulated in its favor, and it is particularly regarded as the most robust framework for explaining the adoption behaviors of information technologies (e.g. Venkatesh and Davis 2000). In our work, TAM describes the relationships between users' acceptance, perceptions, and external variables. As shown in Fig. 1, user acceptance is examined through two cybersecurity behaviors: intention to use and actual usage. TAM identifies two considerations in an individual's decision to adopt an information system: Perceived Usefulness (PU) and Perceived Ease of Use (PEOU). Through these, TAM provides a theoretical framework for exploring the effect of external variables on beliefs that are internalized, and their subsequent impact on intentions and actual behavior. According to TAM, PU and PEOU are the primary determinants of the intention to use and subsequent usage behavior. PMT, on the other hand, measures the components of a fear appeal in determining the variables that impact protection motivation in the form of behavioral intentions. Our study takes TAM as a core theoretical foundation and extends it with PMT's cognitive mediation processes of threat and coping appraisal to develop a predictive model. The model is further augmented with two additional user insights related to personalized digital security as primary determinants to empirically assess and predict the user's cybersecurity behavior. These are Value for Personalization (VFP) and Attitude to Personal Data (APD). The ensuing paragraphs provide justification for the inclusion of these determinants in the research model, along with the related propositions.

Proposition set 1: user perceptions
Beliefs that users hold about the usefulness of systems and their ease of use affect their intention to use, and actual usage of, those systems. These perceptions have been extensively explored in previous technology acceptance research, and provide support for the following propositions with regard to web browser security controls (WBSC).

Perceived usefulness (PU)
-H1: PU of WBSC is positively related to cybersecurity behavior
In the TAM, perceived usefulness refers to an individual's intrinsic belief about job-related benefits, such as productivity, effectiveness, and performance, associated with using new technology. In the context of this research, PU refers to the degree to which a person believes web browser security settings would improve their protection against cyber-attacks. This definition captures both PU in the TAM model and response efficacy in the PMT model. Perceived usefulness has been reported to have a positive impact on the adoption and usage of information systems (Davis 1989; Igbaria et al. 1997; Woon et al. 2005). Woon et al. (2005) found that response efficacy (similar to perceived usefulness) significantly impacted home computer users' decision to protect their wireless network. Jeyaraj et al. (2006) reviewed and analyzed empirical studies conducted on IT innovation adoption in the preceding decade and found perceived usefulness to be the best predictor of behavioral intention. The proposition here is that users are more likely to adopt security measures if they believe the security mechanisms provided (in this case web browser security settings) are effective in making them cyber-secure.

Perceived ease of use (PEOU)
-H2: PEOU of WBSC is positively related to cybersecurity behaviors
-H3: PEOU of WBSC is positively related to PU

PEOU refers to an individual's perception of the cost, in terms of time and effort (mental and physical), involved in using a system (Davis 1989). In previous studies, PEOU has been found to have both a direct and an indirect effect on behavior through its impact on the PU of the technology being investigated. Suh and Han (2003) also discovered that both security concerns and usability dimensions have significant direct and interaction effects on the adoption of smartphones for internet banking. Thus PEOU can influence users' attitudes towards a system application as well as their perception of the application's usefulness during use, thereby impacting behavior both explicitly and implicitly (Alharbi and Drew 2014; Davis 1989; Venkatesh and Davis 2000). In the context of digital security, Ellis (2009, p. 41) noted that "if security systems are burdensome, people may avoid using them, preferring convenience and functionality to security". There is also empirical support for response cost (similar to PEOU) having a significant negative impact on intention to enable security settings on a wireless network (Woon et al. 2005). It is therefore posited that WBSC that are difficult to use and require a lot of effort to configure will most likely be ignored and/or undervalued by users.

Perceived risk (PR)
-H4: PR about WBSC is negatively related to cybersecurity behavior

Threat appraisal is a key aspect of the PMT and refers to the beliefs that individuals form about perceived risk when they become aware of security threats. Their perceived risk is then evaluated against the effectiveness of the coping mechanisms that are made available. PMT includes rewards, severity and vulnerability to explain how threats are perceived. In our model, we consider rewards to be analogous to PU, and define PR as the degree to which a user feels the uncertainties and negative effects of configuring some web browser security settings in the areas of functional, time, information, physical and social risk (Lu et al. 2005). In the literature, perceived risk is considered to be a multi-dimensional construct consisting of different types of risk (e.g. physical, functional, social, etc.) (Jacoby and Kaplan 1972; Kaplan et al. 1974; Lu et al. 2005). This study examined only the five types of risk considered most relevant in the context of security technology adoption. Functional or performance risk describes the potential ineffectiveness of a security mechanism, and hence failure to achieve the desired security goals. Time risk refers to the perceived time loss that may occur owing to difficulty in configuring some security settings correctly. Information risk is the likelihood that instructions regarding the correct use of the security mechanism are inadequate or unreliable (risk associated with information failure). Physical risk means the extent to which an individual believes adopting the security technology can protect them against some form of loss, such as data, privacy or any component of the computer system (e.g. the hard disk). Social risk describes the possibility that an individual may be worried about losing their reputation in a social group due to the adoption of a security control or technology.
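To make the multi-dimensional construct concrete, the sketch below aggregates the five risk dimensions into a single index. The Likert-style item scores and the equal default weighting are hypothetical; in the study itself the relative contribution of each dimension is estimated empirically via PLS-SEM rather than fixed by hand.

```python
# Hypothetical aggregation of the five perceived-risk dimensions into
# one index. Scores are illustrative Likert means (1 = low, 5 = high).

RISK_DIMENSIONS = ("functional", "time", "information", "physical", "social")

def perceived_risk_index(scores, weights=None):
    """Weighted mean of per-dimension risk scores (equal weights by default)."""
    if weights is None:
        weights = {d: 1.0 for d in RISK_DIMENSIONS}
    total = sum(weights[d] for d in RISK_DIMENSIONS)
    return sum(weights[d] * scores[d] for d in RISK_DIMENSIONS) / total

scores = {"functional": 4, "time": 5, "information": 3, "physical": 2, "social": 1}
index = perceived_risk_index(scores)  # equal-weight mean of the five scores: 3.0
```

Supplying non-uniform weights would reflect a context where, say, time risk matters more to a given user group than social risk.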
Perceived risk has received considerable attention as a key predictor of consumer behavior within the marketing literature (e.g. Dai et al. 2014; Forsythe et al. 2006; Forsythe and Shi 2003). The construct has also been integrated into various predictive models and has been found to have a significant impact on technology adoption behavior (e.g. Bélanger and Carter 2008; Featherman and Pavlou 2003; Lee 2009; Özkan et al. 2010). However, far too little attention has been paid to it as a possible predictor of cybersecurity behavior. The article by Lu et al. (2005) is one of the few studies that examined and found that perceived risk impacted intention to adopt an online anti-virus program through PU and attitude towards use. More recently, Chang (2010) proposed an extended TAM model that includes risk-related factors for the prediction of managerial attitudes towards the adoption of security technologies within an organization. Based on the findings of significant effects of PR in previous technology adoption studies, we propose that computer users perceiving high risks associated with WBSC will have a negative attitude towards cybersecurity in general.

Value for personalization (VFP)
-H5: High VFP will positively affect intention to adopt personalized adaptive cybersecurity

Personalization is the adaptation of services or products to the needs and/or preferences of a user. Whereas adaptive systems can be built to suit a categorized group of users, personalization takes this further, to a more individual level. A number of online vendors now provide personalized products and services through online profiles of their consumers (e.g. eBay, Dell, Amazon, etc.). Different ML techniques are adopted in constructing these consumer profiles to facilitate the provision of personalized products and services (Izquierdo-Yusta et al. 2015; Kim et al. 2001; Raghu et al. 2001).
In marketing/e-commerce, personalization has been recognized as a significant influential factor in various consumer behavioral models (e.g. Kim et al. 2001; Xu 2006). User-specific profiles allow online vendors to relate to their customers on an individual basis, leading to improved customer satisfaction and loyalty. From the online users' point of view, however, the overall benefit of creating an online profile is the convenience of having different parts of their browsing experience personalized. Personalization can contribute to the effectiveness of technical security controls through the improvement of user interactions and experience with the system. The nature of personalization may, however, differ for different types of user experience, based on the context within which user profiles are defined and the techniques used to create them.
VFP in this study refers to the level of appreciation that a user has for all types of personalization possibilities within cyberspace. Because we recognize personalization as an important determinant of user experience and usage, it is imperative to assess its significance within the structural model alongside a comprehensive set of other possible determinants of cybersecurity behaviors. The assumption here is that users who generally have positive attitudes towards the different types of personalized products and services available online are more likely to accept and use personalized adaptive cybersecurity.

Attitude to personal data (APD)
-H6: APD is positively related to cybersecurity behaviors.
The construct of personal data (PD), and how it is perceived by individuals, is identified in our research as a critical component in explaining and predicting individuals' attitudes towards cybersecurity. Security in the digital world is often argued to be concerned with three main goals: confidentiality, integrity, and availability. The confidentiality aspect of security is a basic privacy goal and is concerned with the prevention of unauthorized access to sensitive data (Schneier 2011). Because personal data is a common factor underlying the constructs of both security and privacy (Pearson 2013), we have theorized that personal data, and how it is perceived by individuals, influences security-related behavior (Addae et al. 2016). APD here refers to the value people place on their data, and their tendency to adopt measures to protect it. It appears that many people now recognize and accept that an increasing part of life in the digital age involves disclosure of personal data. However, this does not nullify the concerns that people may have about the actual use of the data they provide (EU 2011). Haddadi et al. (2015) highlighted the complex nature of personal data as a construct and how users' preferences and concerns differ based on context and sociological factors. To aid the inclusion of APD in cybersecurity behavioral research models such as ours, we conducted a study that explored APD dimensions towards the development of a personal data attitudes measurement scale (Addae et al. 2017). Based on findings from this study, we hypothesize that users who are generally protective of their personal data are more likely to adopt cybersecurity measures.

Proposition set 2: the moderating effects of external factors
Moderators are variables that modify the direction or strength of relationships between independent and dependent variables in a predictive model. Moderating variables alter relationships through interaction with either endogenous or exogenous variables, or by reallocating the error terms. Moderating factors have been shown to be very significant in various technology acceptance models, as they can potentially improve the predictive validity of a model under investigation (Chin et al. 2003; Venkatesh et al. 2003). Moderators may also account for inconsistent factor findings in various user technology acceptance models (Sun and Zhang 2006). Sun and Zhang (2006) examined the moderating effects in technology acceptance models and concluded that the exclusion of important moderators reflecting individual and contextual differences may account for lower explanatory power (predictive validity) and factor inconsistencies in previous findings. Accordingly, models that are extended with moderators such as gender, experience, and cultural background are better able to capture the intricacies of complex contexts. Prior empirical studies have identified several moderating factors involving differences in context, and in cultural, individual, organizational, and system characteristics. In this study, external variables reflecting both individual and contextual differences and system characteristics are examined.
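The statistical role of a moderator can be sketched as an interaction term in a regression. The example below uses synthetic data with an invented predictor and a binary moderator; the study itself assesses such effects within PLS-SEM, so this is only an illustration of the mechanism, not the method used.

```python
import numpy as np

# Synthetic illustration of moderation: the slope of a predictor (e.g. PEOU)
# on intention differs by a binary moderator (e.g. experience). The
# interaction coefficient captures that difference in slope.
rng = np.random.default_rng(0)
n = 500
peou = rng.normal(size=n)                 # hypothetical predictor
experience = rng.integers(0, 2, size=n)   # hypothetical binary moderator
intention = (0.4 * peou + 0.3 * experience
             + 0.5 * peou * experience    # true moderation effect
             + rng.normal(scale=0.1, size=n))

# Ordinary least squares with an explicit interaction column.
X = np.column_stack([np.ones(n), peou, experience, peou * experience])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
# coef[3] estimates the interaction (moderation) effect, close to 0.5.
```

Dropping the interaction column from `X` would leave a single averaged slope for `peou`, which is exactly the loss of explanatory power that Sun and Zhang (2006) attribute to excluding important moderators.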

Individual differences
The acceptance and adoption of cybersecurity technologies may vary from one individual to another depending on differences in their characteristics. Individuals differ in terms of personality, level of experience, cognitive characteristics, background, and other demographics. Various aspects of individual differences have been examined in previous research (see below). Most studies have only considered a limited number of the variables pertaining to individual differences. Thus, a need remains for a holistic approach to cybersecurity user modeling that examines the relations between various aspects of individual differences and cybersecurity-related factors. This study explores a wider variety of these individual characteristics and examines their impact on perceived risk, usefulness, ease of use and attitude to personal data within the context of cybersecurity. As observed already, the TAM is based on the fundamental principle that user perceptions mediate the influence of all other external factors that may influence technology acceptance and usage. The taxonomy of individual difference variables from previous research (e.g. Alavi and Joachimsthaler 1992; Bostrom et al. 1990) was considered in identifying individual variables of interest that can be reliably measured alongside the cybersecurity behavioral variables in our predictive model. Consequently, individual difference variables in the model cover both the categories of demographics (age, gender, and environment) and the descriptive characteristics of domain knowledge (DK), self-efficacy (SE) and users' security breach concern levels (SBCL) as external variables impacting on behavioral intentions towards cybersecurity.
Demographic variables Age has been found to moderate various factors in technology adoption and usage in the workplace (Morris and Venkatesh 2000). In the area of cybersecurity, netizens between the ages of 18 and 25 were found to be more susceptible to phishing than other age groups (Kumaraguru et al. 2009; Sheng et al. 2010). The existence of gender differences in perception attributes has also been confirmed with a variety of IS diffusion models including TAM (Venkatesh and Morris 2000). Shin (2009) also examined and found significant moderating effects of demographic variables, including income, on the interactions among attitudes and behavioral factors in a unified theory of acceptance and use of technology (UTAUT) model for mobile payment. More recently, Anwar et al. (2017) observed gender differences in perceived computer security aptitudes and found that, among employees from different organizations, men scored higher on self-reported cybersecurity behavior than women. The usefulness and usability of computer technologies have also been found to depend on several contextual factors, including the technical, organizational and physical environment within which they are adopted and used (Parsons et al. 2010; Maguire 2001). Gratian et al. (2018), for instance, examined the influence of personality traits on cybersecurity behavior intentions, highlighting the mediating effects of environmental factors on individual differences in making security decisions. Consequently, we included three main demographic moderators (age, gender, and environment) in the study analysis to examine the moderating effects of internet users' demographics on cybersecurity behavior. The environment in our model refers to the physical location where participants in the study most often use their laptop/desktop computers to access the internet.
-H7: User demographics of age, gender and environment will moderate the relationship between the constructs of the proposed predictive model for cybersecurity behavioral intentions.
Descriptive characteristics Security Breach Concern Level (SBCL) and Self-Efficacy (SE) are PMT constructs adapted to examine the mediating effects of a participant's protection motivation on cybersecurity behavior. In PMT, a person's protection motivation is derived from two cognitive appraisal processes: threat appraisal and coping appraisal. Apart from PR, fear arousal (the level of concern invoked by the threat) also captures threat appraisal within PMT models. Threat susceptibility has been found to predict security intentions in a number of PMT-based models used to study safety behaviors (Tsai et al. 2016). An individual's assessment of the probability and consequences of a security threat is externalized as a security concern in this study. SBCL, therefore, refers to the degree of security threat an individual feels exists towards their personal safety online. The more convinced a user is that cybersecurity threats pose significant damage to their personal digital assets, the more concerned they will be, resulting in a more positive attitude towards protection mechanisms. Hence we can assume that:
-H8: High SBCL will positively influence attitude towards cybersecurity.
Several studies have examined self-efficacy by integrating it with TAM (e.g. Amin 2007; Hasan 2006; Hong et al. 2002; Ramayah 2006). For example, Chau (2001) incorporated computer attitude and self-efficacy into the original TAM as external variables affecting perceived usefulness and ease of use. Related research into security behaviors finds support for the prediction that high self-efficacy positively influences attitude towards security countermeasures (Herath and Rao 2009; LaRose et al. 2008; Milne et al. 2009; Woon et al. 2005). Self-efficacy has also been shown to influence the adoption and usage of IT (LaRose et al. 2008; Compeau et al. 1999). In this study, cyber-citizens' self-efficacy influencing and/or predicting attitude towards cybersecurity behavior is examined. The expectation is that individuals with high self-efficacy about their ability to optimize web browser security settings will have a more positive attitude towards cybersecurity than those with low self-efficacy. Therefore:
-H9: High SE about WBSC will positively influence attitude towards cybersecurity.

System characteristics
System characteristics such as quality, interface design, speed/reaction time, etc., are some of the external factors proposed to have an indirect effect on the acceptance and usage of information systems (IS) through user perceptions (Davis et al. 1989; Lin and Lu 2000). For instance, Pituch and Lee (2006) included system characteristics as part of the external variables influencing e-learning use through perceived ease of use and usefulness. To do this, they solicited user ratings on three different aspects of e-learning systems: functionality, interactivity and response time. System characteristics, especially functionality and interactivity, were found to have the strongest total effect on the dependent variables of their model. The role of system characteristics in predicting technology acceptance through user perceptions has been explored in different contexts with a variety of system-specific features. According to Calisir et al. (2014), system characteristics such as security, reliability, and speed, as a measure of system quality, influence expectations of the user experience level, thereby increasing users' perceived ease of use. In one of the earliest studies conducted to measure user acceptance of information technology, the functional and interface characteristics of an electronic mail system and a text editor were found to have a significant direct effect on attitude towards usage (Davis 1993). In our study, we identified three interface characteristics (layout, terminology, and navigation) as critical for user interaction with WBSC. Thus we argue that usability features such as a clear, consistent layout and easy navigation will impact on a user's perception of WBSC, and hence the decision to accept or reject usage.
-H10: The quality of WBSC interface design will positively influence attitude towards cybersecurity.
4 The empirical study

Research design
The main research objective was to investigate the influential factors impacting people's security behavioral intentions, with a view to predictive analysis of users' acceptance of personalized adaptive cybersecurity (PAC) for web browsers. A quantitative data collection and analysis approach was adopted, similar to those employed by Lee and Kozar (2008); Lin (2012); Venkatesh et al. (2003); Xu et al. (2008) in predicting behavioral intention. A field survey consisting of an online measurement instrument designed to collect data regarding factors influencing cybersecurity attitude and behaviors was conducted. The survey instrument was developed and administered using Qualtrics, an online survey tool. The measures were mostly adapted from previous studies that have explored various determinants of technology usage and specific computer security practices. For instance, the original measurement scales of the TAM were adapted and modified to fit the context of WBSC usage. All construct measures were assessed with a 5-point Likert-type scale ranging from "strongly agree" to "strongly disagree", except for the demographics and questions related to user preferences and/or experiences. Both positively and negatively worded items were included on the scales. Negatively worded items were reverse-coded during the data analysis to ensure that a higher-numbered response on the Likert scale would represent a higher positive attitude score, and vice versa.
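As a concrete illustration of the reverse-coding step described above, the snippet below recodes a hypothetical negatively worded item on a 5-point scale; the item name and responses are illustrative, not taken from the survey.

```python
# Hypothetical responses to one negatively worded item on a 5-point Likert scale
# (1 = strongly disagree ... 5 = strongly agree); the item name is illustrative.
pr_item_neg = [1, 2, 4, 5, 3]

# Reverse-code so that a higher number always represents a more positive attitude:
# on a 5-point scale the recode is (scale_max + 1) - response, i.e. 1<->5, 2<->4, 3->3.
SCALE_MAX = 5
pr_item_recoded = [(SCALE_MAX + 1) - r for r in pr_item_neg]
print(pr_item_recoded)  # -> [5, 4, 2, 1, 3]
```

The same recode generalizes to any odd-point Likert scale by substituting the scale maximum.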
The measurement instrument developed for the cybersecurity behavioral model has four main conceptual/theoretical components consisting of individual differences, user perceptions/attitudes, behavioral variables, and cybersecurity personalization components. The individual differences section consists of four exogenous driver constructs (i.e., IC, DK, SE, and SBCL) as well as basic demographics such as age, gender, and environment. Thus the section measures participants' experience with web browser security (DK), self-efficacy (SE), personal preferences in terms of browser types and their respective user interfaces (IC) and their levels of concern about security breaches (SBCL). The second part of the instrument assessed participants' general attitudes towards cybersecurity from five main user perceptions: Ease of Use, Usefulness, Risk, Personalization and Personal Data. Hence the TAM and PMT items (PU, PEOU, and PR), together with the value for personalization (VFP) and attitude to personal data (APD) items, represent the key determinants of the endogenous target constructs.
To minimize respondent fatigue, the APD scale adopted from Addae et al. (2017) was simplified by selecting only eight items, based on the overall cluster membership predictor importance of the APD factors as well as the reliability scores of the measured items. Consequently, questions on Personal Data Awareness (PDA), Personal Data Protection (PDP) and Privacy Concerns (PC), measured reflectively, captured the major facets of the APD as a Type II second-order construct. This allowed us to fully assess participants' attitudes to personal data in relation to cybersecurity intention and usage behavior. The third section (behavioral variables) consists of measures for the target constructs of interest (i.e., BI and ACB) and asked whether the respondents had ever used or attempted to use web browser security functionalities, as well as their intentions toward personalized web browser security assistance. In the final part, items adapted from Xu et al. (2008) were used to collect participants' ratings on the personalization dimensions identified for the purposes of building a Bayesian-based network model for adaptive cybersecurity. All measured items included in the survey instrument are described, along with references to the sources they were adapted from, in "Appendix A". Items were grouped into the factors represented on the research model (Fig. 1) to ensure that a complete dataset was collected for hypothesis testing and data analysis.

Data collection
A pilot test was first conducted with a mix of 50 university students and lecturers to ensure the survey instrument was comprehensible and valid. Feedback from the pilot was used to revise the final version. Convenience sampling was adopted, with an online survey distributed via email on two main university campuses in China and the UK. The survey form was also distributed online using various social media platforms including Facebook, Twitter, WeChat and LinkedIn. A total of 421 participants took part in the survey; however, 37 incomplete and invalid responses had to be removed, resulting in 384 usable responses. Following the "ten times" rule of thumb on minimum sample size, the 384 valid responses meet the requirement for a PLS-SEM analysis: the sample is more than ten times the largest number of structural paths (six) directed at the most targeted construct in the model (ACB), and also more than ten times the number of indicators (six) used to measure the most complex construct in the model (APD) (Hair et al. 2011). The raw data were imported from Qualtrics and coded into the IBM SPSS statistics program for a descriptive analysis of respondent profiles.
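The ten-times rule check amounts to a simple arithmetic comparison; the sketch below uses the figures reported in this section.

```python
# "Ten times" rule of thumb for minimum PLS-SEM sample size (Hair et al. 2011),
# using the figures reported in the study.
usable_responses = 384
paths_to_most_targeted_construct = 6      # structural paths directed at ACB
indicators_of_most_complex_construct = 6  # indicators measuring APD

minimum_n = 10 * max(paths_to_most_targeted_construct,
                     indicators_of_most_complex_construct)
assert usable_responses >= minimum_n, "sample too small by the ten-times rule"
print(f"required n >= {minimum_n}, collected n = {usable_responses}")
```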

Data analysis
The settings and goals of this research favor the use of PLS-SEM based on the criteria identified by Hair et al. (2011). Using the SmartPLS 3 software, the Structural Equation Modeling (SEM) technique of Partial Least Squares (PLS) was employed to assess the theoretical model (Ringle et al. 2015). PLS-SEM has proven to be a particularly valuable approach to developing and testing models in behavioral research. The approach is particularly versatile for extending models and running complementary analyses such as nonlinear relationships and moderation, alongside hierarchical component models, allowing more complex model relationships to be tested. The PLS-SEM technique also mitigates data-related threats to the validity of standard predictive analytics, such as small sample size, unobserved heterogeneity and non-normality in the dataset. PLS-SEM computes parameter estimates by least squares estimation, thereby minimizing the required assumptions about the dataset, including the measurement scale of the data collection, sample size and residual distributions (Henseler et al. 2016). The PLS-SEM approach also allows for formative and multi-level constructs, making it favorable for exploring possible causal relationships while avoiding parameter estimation biases typical of regression analysis. With reference to the two-step analytical process described in Hair et al. (2011), the measurement model was first evaluated for reliability and validity. The structural theory was then verified to determine the significance levels of the hypothesized relationships in the second step. The two-step approach ensures inferences drawn from the structural relationships are based on validated measurement scales.
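To make the bootstrap significance testing used in PLS-SEM concrete, the sketch below resamples a toy dataset to obtain a standard error and t value for a standardized coefficient, standing in for a PLS path coefficient. The data are made up; only the 5000-resample count mirrors the study's procedure.

```python
import random
import statistics

random.seed(0)

# Toy stand-in for a PLS path: bootstrap the standardized coefficient (here, the
# Pearson correlation, since there is a single predictor) between two scores.
x = [1, 2, 2, 3, 4, 4, 5, 5, 3, 2, 4, 1]
y = [1, 2, 3, 3, 4, 5, 5, 4, 3, 2, 5, 2]

def std_beta(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    if sx == 0 or sy == 0:
        return None  # degenerate resample with no variance
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
    return cov / (sx * sy)

n = len(x)
boots = []
for _ in range(5000):  # 5000 bootstrap resamples, as used in the study
    idx = [random.randrange(n) for _ in range(n)]
    b = std_beta([x[i] for i in idx], [y[i] for i in idx])
    if b is not None:
        boots.append(b)

beta = std_beta(x, y)
se = statistics.stdev(boots)  # bootstrap standard error
t_value = beta / se           # compared against t critical values for significance
print(f"beta = {beta:.2f}, SE = {se:.3f}, t = {t_value:.1f}")
```

SmartPLS performs the resampling at the level of the full PLS algorithm rather than a single correlation, but the logic of deriving t values from the bootstrap distribution is the same.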

Sample characteristics
Table 1 summarizes the characteristics and demographic distribution of participants. 51.3% of respondents were female and 48.7% male. The majority of respondents were students (70.3%) who fell within the age group of 18-24 (62.0%). A total of 99% of the respondents were educated well above 12th grade, and 72.7% earned an income of 1000-8000 US dollars per month while 27.3% earned less than $1000.

Reliability and validity of the measurement model
The outer measurement model was examined for reliability and convergent validity with the same PLS software. All variance inflation factor (VIF) values were below 5.0, which suggests multicollinearity is unlikely to be a problem in our data analysis. Following the guidelines in Hair Jr et al. (2016), VIF was further checked to determine whether the first-order factors of APD were three distinct constructs. The VIF values of all constructs were below the conventional threshold of 5.0, with the highest being 3.195. Convergent validity for items in this study was assessed through their factor loadings, to confirm that each item adequately measures its target latent construct. All the indicator items had significant path loadings at an alpha level of 0.01 and loaded highly (> 0.5) on their respective parent constructs (Hair Jr et al. 2016; Urbach and Ahlemann 2010). All of the outer loadings in the measurement model were above the minimum recommended level of 0.708, with the exceptions of ACB_4 (0.622) and PU_3 (0.651). We retained these two items in the measurement model because they were very close to 0.70 and the criteria for reliability and convergent validity were met (Hair Jr et al. 2016). For the higher-order construct (HOC) APD, all paths from the three exogenous driver constructs were meaningful (PDA = 0.20, PDP = 0.68, and PC = 0.21). All the values of composite reliability (CR) and average variance extracted (AVE) were well within the recommended thresholds (Hair et al. 2010; Urbach and Ahlemann 2010), with CR ranging from 0.81 to 0.95 and AVE from 0.62 to 0.86 (Table 2). The square roots of all the AVEs, shown in bold on the diagonal in Table 3, show that discriminant validity is well established. The distinctiveness of the content captured by the three individual first-order factors of APD is demonstrated by their correlations, which are well below the 0.80 boundary for establishing discriminant validity. In summary, the results of the statistical analysis support the reliability, convergent validity and discriminant validity of the scales in our research model.
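The CR and AVE statistics of the kind reported in Table 2 follow standard formulas over standardized outer loadings; the sketch below applies them to illustrative loadings, not the study's actual estimates.

```python
# Composite reliability (CR) and average variance extracted (AVE) from the
# standardized outer loadings of one reflective construct; loadings are illustrative.
loadings = [0.82, 0.78, 0.75, 0.71]

sum_l = sum(loadings)
error_var = sum(1 - l ** 2 for l in loadings)  # standardized indicator error variances

cr = sum_l ** 2 / (sum_l ** 2 + error_var)           # rule of thumb: CR >= 0.70
ave = sum(l ** 2 for l in loadings) / len(loadings)  # rule of thumb: AVE >= 0.50

print(f"CR = {cr:.2f}, AVE = {ave:.2f}")  # -> CR = 0.85, AVE = 0.59
# For discriminant validity (Fornell-Larcker), sqrt(AVE) of each construct should
# exceed its correlations with the other constructs, as checked in Table 3.
```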

Structural model
The results of the structural model analysis are displayed in Fig. 2. The paths in a PLS structural model can be interpreted similarly to standardized regression betas; hence the overall predictive strength of the model is assessed by the explained variance in the endogenous variables. Tests of significance of all paths were performed following the bootstrap resampling procedure outlined in Garson (2012). In the model, the R² value indicates the total variance explained in the endogenous latent variables. R² values of 0.19, 0.33, and 0.67 for endogenous variables in the path model are described as weak, moderate, and substantial respectively. A bootstrapping resampling procedure (5000 samples) was used to determine the significance of the path coefficients. Here, a multistage approach was adopted to facilitate the assessment of the APD impact on the two main endogenous variables in our extended-TAM model. The first model consisted of only the TAM and PMT latent variables (LV) as mediators, and this explained 59% and 49% of the variances in the two target constructs BI and ACB respectively. The value for personalization (VFP) factor was included in the second stage, which increased the variance explained in ACB to 64%. The effect size (f²) was assessed with the following equation:

f² = (R²_included − R²_excluded) / (1 − R²_included)

where R²_included and R²_excluded are the R² values of the dependent LV when a specific independent LV is included in or excluded from the model. Values of 0.02, 0.15, and 0.35 for f² represent small, medium, and large effects of the exogenous LV respectively (Hair Jr et al. 2016). The effect size of VFP on the endogenous construct ACB was large (0.40) and significant (p < 0.001). Subsequently, the APD LV was added to the model, and this second-order factor increased the R² of BI from 59 to 63%, and that of ACB from 64 to 74%. The effect size f² is large (0.47) and significant (p < 0.001) for the predictive value of APD on ACB. There is also a small effect size (0.10) of APD on PCAI, which is significant (p < 0.005). Figure 2 provides the R² values for each endogenous variable in the full PLS model along with path coefficients and associated t values of the paths. To simplify the structural model and make it more legible, only paths that have significant relationships (indicated with asterisks on the path coefficient) are included in Fig. 2. However, the insignificant path from BI to ACB is included since these are the two main output variables that require further discussion.
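The f² computation is a small formula; the sketch below applies it to the rounded R² values reported in the text. Note that with rounded inputs it gives roughly 0.38 rather than the reported 0.47, which was presumably computed from unrounded R² values.

```python
# Effect size f^2: change in R^2 when a predictor is excluded, scaled by the
# unexplained variance of the full model.
def f_squared(r2_included, r2_excluded):
    return (r2_included - r2_excluded) / (1 - r2_included)

# APD -> ACB step as reported in the text: R^2 rose from 0.64 to 0.74 when APD
# was added (rounded values, so the result differs from the reported 0.47).
f2 = f_squared(r2_included=0.74, r2_excluded=0.64)
print(f"f^2 = {f2:.2f}")  # large by the 0.02 / 0.15 / 0.35 thresholds
```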
The results (Table 4) show that all five behavioral attitude determinants, PEOU, PU, PR, VFP and APD, have significant effects on the behavioral intention to accept adaptive personalized cybersecurity. The five constructs together explain 63% of the variance in behavioral intention (BI). However, only three of them were found to predict actual previous adoption of cybersecurity tools, as the hypothesized path from BI was not statistically significant. The relationship between PU and ACB was significant (β = −0.09, p < 0.05), but not in the predicted direction. In this study, PEOU had the highest of the five path coefficients and a significant positive relationship with BI (β = 0.84, t = 51.5, p < 0.001), while APD appears to be the most important variable in the model predicting ACB (β = 0.64, t = 12.87, p < 0.001). VFP was also found to have a significant effect on BI and ACB, thereby justifying its importance in influencing users' behavioral intention and attitude towards adaptive cybersecurity in a personal context. In addition to evaluating the magnitude of the R² values as a criterion of predictive accuracy, the model's out-of-sample predictive power (Q²) was also examined. Here, a sample re-use technique known as blindfolding, which omits part of the data matrix and uses the model estimates to predict the omitted part, was applied to obtain the Q² values for the endogenous constructs (Hair Jr et al. 2016; Tenenhaus et al. 2005). Q² values greater than zero for a specific reflective endogenous LV indicate the predictive relevance of the path model for that particular construct. Relative values of 0.02, 0.15 and 0.35 indicate that the model has small, medium or large predictive relevance respectively for the specified endogenous construct. Table 5 shows that all Q² values are considerably greater than zero, thus supporting the cybersecurity behavioral model's predictive relevance for all the endogenous constructs, and specifically its large predictive relevance (Q² > 0.35) for both of our two main target constructs (BI and ACB).
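The Q² statistic compares model predictions for the omitted (blindfolded) data points against the trivial mean prediction; the toy sketch below illustrates the calculation with made-up values rather than the study's data.

```python
# Q^2 = 1 - SSE/SSO, where SSE sums squared errors of the model's predictions for
# the omitted data points and SSO sums squared errors of the trivial mean
# prediction. All values here are illustrative.
observed  = [3.0, 4.0, 2.0, 5.0, 4.0, 3.0]
predicted = [3.2, 3.8, 2.4, 4.6, 4.1, 2.9]  # model estimates for the omitted points

mean_obs = sum(observed) / len(observed)
sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
sso = sum((o - mean_obs) ** 2 for o in observed)

q2 = 1 - sse / sso
print(f"Q^2 = {q2:.2f}")  # Q^2 > 0 indicates predictive relevance for the construct
```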

Moderating effects
Further analysis was conducted to examine the moderating effects of demographic variables (age, gender) as well as the moderating influence of context of use (Home vs Corporate vs Public environments) on the hypothesized relationships in our model. When included in the model as control variables, age (β = 0.17, t = 2.74, p < 0.05), gender (β = 0.08, t = 2.00, p < 0.05) and environment (β = −0.23, t = 3.71, p < 0.001) were significantly associated with BI, but none of these were significantly associated with ACB. The physical environment variable was negatively associated with BI, and it seems that users who more often access the internet in public places are less interested in personalized adaptive cybersecurity. Although no specific hypotheses were declared for the demographic variables of income, education and ethnicity, their moderating effects were also explored in the analysis. However, since their effects were not statistically significant, they were not included in the results presented here for further analysis. PLS-SEM multi-group analysis (PLS-MGA) was conducted to determine whether significant differences are present between coefficients for the observed heterogeneity (age, gender, and environment). PLS-MGA is used for comparing PLS model estimates across groups of data when the groups pre-exist (Hair Jr et al. 2016; Sarstedt et al. 2011).
To explore the moderating influence of gender, the data was split into Male (n = 184) and Female (n = 200) subgroups, and separate analyses were computed for each group with the full model. Three subgroups were created for age, 18-34 (n = 169), 35-44 (n = 139) and over 44 (n = 76), as well as for environment and/or context of use: Corporate (n = 111), Home (n = 205) and Public (n = 68). As the maximum number of arrows pointing to an endogenous variable in our model is five, a minimum of 5 × 10 = 50 observations per group is required according to the ten-times rule. The group-specific sample sizes for the three moderating variables can, therefore, be considered sufficient for the PLS-MGA. Since more than two groups are being compared in the case of age and environment, the Omnibus test of group differences (OTG) approach was applied as a first step to assess whether the path coefficients are equal across the three distributions for the age and environment variables.
The analysis (Table 6) yields FR values ranging from 493.35 to 23289.54 for paths between the mediating variables and the two target variables for the environment groups. FR values ranging from 686.06 to 10143.30 were yielded for the age group differences on direct paths to our target variables. The null hypothesis that the path coefficients are equal across the three age groups, and across the three environment groups, can therefore be rejected. The test rendered all differences among the groups significant at p ≤ 0.01, suggesting that at least one path coefficient differs from the remaining two across the three groups, in the case of both age and environment.
Table 7 shows the differences in the path coefficient estimates of the group comparisons with respect to all the direct paths to the two DVs in the model, and provides the results of multigroup comparisons based on PLS-MGA and the Welch-Satterthwaite (W-S) test. While the PLS-MGA is a non-parametric test for differences in group-specific results based on PLS-SEM bootstrapping, the W-S is a parametric test that assumes unequal variances across groups to determine the significance of differences in group-specific PLS-SEM estimates. As a one-tailed test, a typical cut-off level of significance for PLS-MGA results is > 0.95 or < 0.05, but the cut-off level can be set to > 0.90 or < 0.10 for smaller sample sizes. Slight differences were observed between the PLS-MGA and W-S with respect to the significance of some of the group differences for specific relationships. For instance, in the comparison of the Home and Public sub-samples, the test rendered the relationship between usefulness and behavior significant (p ≤ 0.10) for PLS-MGA, whereas this was insignificant in the W-S test (p = 0.15).
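The W-S comparison of a path coefficient across two groups can be sketched with the Welch t statistic over group-specific bootstrap standard errors. In the snippet below, the β for the male BI→ACB path matches the value reported later in this section, but both standard errors and the female β are assumed for illustration.

```python
import math

# Welch-style parametric comparison of one path coefficient across two pre-existing
# groups, using group-specific bootstrap standard errors (SE values are assumed).
beta_m, se_m, n_m = -0.25, 0.082, 184   # e.g. BI -> ACB, Male subgroup
beta_f, se_f, n_f = -0.02, 0.075, 200   # same path, Female subgroup (beta assumed)

t = (beta_m - beta_f) / math.sqrt(se_m ** 2 + se_f ** 2)

# Welch-Satterthwaite approximation of the degrees of freedom
df = (se_m ** 2 + se_f ** 2) ** 2 / (
    se_m ** 4 / (n_m - 1) + se_f ** 4 / (n_f - 1)
)
print(f"t = {t:.2f}, df = {df:.0f}")
```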
Table 8 summarizes the PLS-MGA results into a matrix to give a simplified visual interpretation for determining significant effects based on demographics/moderators. The findings support the assumption that the effects of the attitudinal variables on the two target constructs may be dependent on moderating variables. The results revealed significant differences in the group-specific PLS path coefficients for the influences of the five mediating variables on ACB, as well as BI on ACB. With regard to the age groups, there were significant differences between the groups for the relationships from BI to ACB, VFP to ACB, PEOU to BI, and PU to BI. In terms of gender, the relationship between BI and ACB was negative and significant (β = −0.25, t = 3.04, p < 0.05) for males but non-significant for females. This suggests that the unexpected negative relationship between BI and ACB found in the full sample results (Fig. 2) is largely driven by the male respondents. Two other significant differences between the male and female subgroups are the relationships from PEOU to BI and from PU to BI. Although the relationship between PEOU and BI is positive and highly significant (p < 0.001) for both groups, the MGA results show that usability is somewhat more important in determining BI for females than for males. Meanwhile, the relationship between PU and BI was positive and significant (β = 0.22, t = 2.16, p < 0.05) for males but insignificant for females.
For the Environment subgroups, there were significant differences for the relationships from APD to BI, PEOU to BI, PU to ACB and PU to BI. Interestingly, the path from PU to ACB was negative and moderately significant (β = −0.31, t = 1.83, p < 0.10) for the Public user group but insignificant for the Corporate environment group. Thus, usefulness is not important in predicting cybersecurity usage behavior for those who mostly access the internet within a corporate environment, while most home and especially public users do not adopt cybersecurity tools even though they may think they are useful. The differences in the environment groups for the relationship from PU to BI are also worth noting. Here PU seems to be more important in predicting positive BI for the Public (β = 0.25, t = 1.92, p < 0.10) and Home (t = 1.63, p > 0.10) user groups than for the Corporate group (β = −0.17, t = 1.73, p < 0.10). We speculate that, due to the availability of professional IT services in corporate environments, members of this user group feel more secure when accessing the internet, and hence may not see the need for an easier-to-use cybersecurity mechanism. Conversely, those who mostly access the internet from non-corporate environments may have no access to cybersecurity experts, and may thus perceive personalized adaptive cybersecurity as an easier way of ensuring their security and privacy online. It should also be noted that the influence of attitude to personal data was relatively consistent across the different groups, except in the case of the Home subgroup, where APD did not seem to be influential in determining their BI, although it is important in predicting their actual cybersecurity usage (β = 0.47, t = 7.57, p < 0.001). Thus attitude towards personal data appears to have a strong influence on cybersecurity behavior and intentions across different user age and gender groups, and for both corporate and non-corporate users.

Framework for personalized adaptive cybersecurity
Technology users differ in various ways in terms of goals, attitudes, and a host of individual characteristics and preferences that tend to influence their user experience. The design of user interaction for security and privacy technologies needs to accommodate different user goals and preferences. In the context of personal computing, web browsers provide a good platform to demonstrate the provision of adaptive and personalized cybersecurity configurations. Most current versions of web browsers allow users to sign in and synchronize their custom configurations across devices. This provides an opportunity to personalize default browser security settings as well as the presentation of alerts, to improve their acceptance rate and reduce the cognitive load associated with digital security at a personal level. User model development is fundamental to an adaptive architecture for personalizing user preferences. A user model consists of essential information and assumptions about users that can then be used to adapt the interaction of an application to specific individual users' needs. Building user models for adaptation and personalization often follows two different approaches: one based on stereotypes and the other on personal data acquired explicitly and/or implicitly (Kuflik et al. 2012). The stereotypical user model requires research and user experimentation to identify domain-based generalization and classification of user interaction behaviors into specific user profiles. The personal model, on the other hand, adapts new interactions based on observed data from an individual user session, in addition to preferences elicited explicitly and/or inferred from stereotypical information (Ardissono et al. 2004). A personal user profile for adaptive cybersecurity, for instance, will include the represented stereotype class based on socio-demographic data, estimates of user actions from observed behavior, and the explicit preferences the user may have stated for security/privacy configurations. Thus, the amalgamation of the stereotypical and explicit personal models allows personalization of adaptive cybersecurity to be based on complementary types of information sourced from cyber behavioral analytics, personal data surveys, and/or machine-generated context data.
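A minimal sketch of how the stereotypical and personal models might be amalgamated in a user profile follows; the class, field names and settings are hypothetical illustrations, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    stereotype: str                               # class inferred from socio-demographic data
    defaults: dict = field(default_factory=dict)  # settings implied by the stereotype
    observed: dict = field(default_factory=dict)  # estimates from behavioral logs
    explicit: dict = field(default_factory=dict)  # settings the user stated directly

    def effective_settings(self) -> dict:
        # Explicit preferences override observed estimates, which override
        # stereotype defaults.
        merged = dict(self.defaults)
        merged.update(self.observed)
        merged.update(self.explicit)
        return merged

profile = UserProfile(
    stereotype="novice_home_user",
    defaults={"block_third_party_cookies": True, "alert_verbosity": "high"},
    observed={"alert_verbosity": "medium"},        # user dismisses detailed alerts
    explicit={"block_third_party_cookies": False}, # user opted out
)
print(profile.effective_settings())
```

The override ordering encodes the paper's point that explicitly elicited preferences complement, and take precedence over, stereotype-derived and observed information.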
Research has shown that the cybersecurity field requires a multidisciplinary approach to identifying and translating the salient factors influencing specific privacy and security decisions into more effective user models. While findings from PLS-SEM are useful for determining these salient factors and their dependencies, a considerable amount of uncertainty remains in the attempt to recognize a user's goals from observations of behavior. A powerful modeling technique developed by the artificial intelligence and ML community for effective reasoning under uncertainty in a sound mathematical manner is Bayesian Networks (BNs) (Nadkarni and Shenoy 2004). BNs, also known as belief networks, provide a consistent way of replicating the essential features of plausible reasoning and have been successfully applied in the fields of medicine (e.g. Sakellaropoulos and Nikiforidis 2000), marketing (Ahn and Ezawa 1997) and business management. BNs are known to be particularly useful in handling uncertainties in user modeling for different kinds of application domains. They are typically used in situations where variables characterize the existence or absence of a quantifiable outcome.
In our study, BNs serve as an important tool to complement the user modeling process for adaptive cybersecurity. This is because the relationships between the many factors influencing a user's digital security decisions are mostly unclear. The empirical study conducted has allowed us to identify these influential factors and determine the directionality of their interactions. This makes directed edges in BNs more appropriate for our model than the undirected edges in Markov random fields (Koller et al. 2007). In the context of cybersecurity, where access control policies and privacy breach regulation are major concerns, accessing real-life behavioral data for research is always a challenge. Complementing the PLS-SEM used to derive additional domain knowledge with a Bayesian-based modeling technique is, therefore, an efficient way to deal with sparse and/or incomplete data. BNs allow us to intuitively infer the hidden states of the influential factors from the PLS-SEM through observation of their interrelating effects. With Bayes' rule, the inference problem can then be formulated as a case of resolving the probability of an unknown variable from the values of variables observed in the empirical study. Apart from being able to describe uncertainty with BNs, there is the added advantage of being able to integrate different types of variables and related data within a single framework, and the flexibility of updating the models with new information at any given time (Fig. 3).
The components of the framework (Fig. 4) were extracted from the empirical study described in Sect. 4. Following the validation of the behavioral research model, the statistical analysis of data on the personalization dimensions proposed in Fig. 3 is used to support the construction of the Bayesian network model in our study. Nielsen and Jensen (2009) described Bayesian networks as a directed acyclic graph (DAG) consisting of a set of variables and a set of directed edges between variables. In this DAG, variables represent events, and a link from event A to event B represents a causal relation whereby A is a parent of B and B is a child of A. Each variable B with parents A1, ..., An has a conditional probability table P(B | A1, ..., An) that holds its conditional probability distributions. Consequently, the proposed Bayesian networks will yield both quantitative measures in the form of conditional probability distributions and qualitative relationships between the components of personalized cybersecurity. The network of relationships in the BNs highlights how the various components interact with each other to influence the decision-making process. Analyzing the personalization components of cybersecurity with a Bayesian network can help characterize the various interactions between user context, profile, preferences, and cybersecurity behavior. To summarize, a user profile comprising personal information and observed behavior, system characteristic variables (e.g. browser type, security settings), and the context of use are the factors considered for personalized or adaptive cybersecurity within web browsers.

Fig. 4 Bayesian Network framework to infer and provide personalized adaptive cybersecurity assistance
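As a minimal sketch of this structure, a single parent-child pair can be encoded with plain Python dictionaries: a prior over the parent, a CPT P(B|A) for the child, and marginal or diagnostic queries computed by summation and Bayes' rule. The node names and probabilities below are illustrative assumptions, not values from the study.

```python
# Minimal two-node Bayesian network: Context -> Acceptance.
# All probabilities are illustrative placeholders, not values from the study.

# Prior over the parent node A (context of use).
p_context = {"home": 0.5, "public": 0.3, "corporate": 0.2}

# CPT P(accept | context) for the child node B.
p_accept_given_context = {"home": 0.6, "public": 0.8, "corporate": 0.4}

# Marginal P(accept), summing over the parent's states.
p_accept = sum(p_context[c] * p_accept_given_context[c] for c in p_context)

# Diagnostic query P(context | accept) via Bayes' rule.
posterior = {c: p_context[c] * p_accept_given_context[c] / p_accept
             for c in p_context}

print(round(p_accept, 2))                          # 0.62
print({c: round(p, 2) for c, p in posterior.items()})
```

Larger networks factor the joint distribution the same way, one CPT per node given its parents.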

Structuring the Bayesian-network-based model
Given the results from the empirical studies, we decided to build and assess Bayesian models that can determine a user's security/privacy needs and their likelihood of adopting available cybersecurity solutions. Defining appropriate variables and the states of those variables is the foundation of an effective user model. We wanted to achieve quality inferences from the models by incorporating contextual information, users' actions including queries (both current and previous), as well as users' backgrounds and personal preferences. It is important to clearly define the states of the variables included in the model so that users can be monitored and the conditional probabilities assessed. To establish a database for the BN model, the impacts of attributes related to web browser security features are analyzed together with individual characteristics and context-of-use factors. Information from the survey instrument is used to describe the nodes and states of the personalization component variables and to calculate the prior probabilities of the model. To simplify the analysis, the number of levels within most of the variables was reduced. For instance, the variable "location" was reclassified into three categories (home, public, and corporate) rather than the seven different locations measured with our survey scale (Home, School, Office, Public Transport, Cafes, Lecture rooms, and Friend's house). Time of use was likewise reduced to peak and non-peak, where peak denotes periods when the user is normally using the internet for work or business-related goals, and non-peak denotes use for pleasure or non-business-related goals. Using a BN to analyze responses to the cybersecurity personalization survey data can uncover and characterize the interaction between the personalization components and users' cybersecurity behavior. Consequently, the output of a BN will reveal both the qualitative relationships between the attributes of personalized adaptive cybersecurity and the quantitative measures, in the form of conditional probability distributions, of the factors' dependencies and interactions.
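The recoding step described above can be sketched as a simple mapping. The assignment of the seven survey locations to the three coarse categories shown here is one plausible grouping assumed for illustration, as is the record format; the paper does not specify the exact mapping.

```python
# Collapsing fine-grained survey levels into the BN's coarser states.
# The grouping below is an assumed, illustrative mapping.
LOCATION_MAP = {
    "Home": "home", "Friend's house": "home",
    "Public Transport": "public", "Cafes": "public",
    "School": "corporate", "Office": "corporate", "Lecture rooms": "corporate",
}

def recode(record):
    """Reduce a raw survey record to the discretized BN node states."""
    return {
        "location": LOCATION_MAP[record["location"]],
        # Peak = work/business-related use; non-peak = leisure use.
        "time": "peak" if record["work_related"] else "non-peak",
    }

print(recode({"location": "Cafes", "work_related": False}))
```

Reducing the state space this way keeps the CPTs small enough to estimate from a modest survey sample.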
BNs can be modeled based on prior domain knowledge and/or training datasets (Heckerman et al. 1995). Since it is not easy to acquire cybersecurity-related datasets on HCUs, we complemented the dataset gathered from our survey with domain knowledge to obtain the best combination of nodes for the BNs. Simulated datasets do not guarantee findings that fully reflect real-world data problems, but they are widely adopted to garner deep insights and to train machine learning models in various application domains (e.g. Judson et al. 2008; Tsanas and Xifara 2012). Simulating aspects of network systems, for instance, has allowed researchers to overcome the challenges of using data mining and ML for cyber analytics and to incorporate their intuition into building training models for intrusion detection (Buczak and Guven 2016). In this context, the cybersecurity personalization factors extracted from the data analysis, along with knowledge about web browser security features, are used to develop the initial BN models for security-related tasks and subtasks. A complete model can then be obtained by combining several partial models developed from domain knowledge and simulated data focusing on representative nodes. For instance, if we know a relationship exists between a user's security/privacy perceptions and expertise, these nodes can be connected by amending the bounds of the states in their conditional probability table (CPT) accordingly. Thus conditional probability distributions (CPDs) of the form P(B|A), the probability of B given A, are used to encode the relationships between variables in the BN. For each node B, the likelihood that the variable will be in each possible state, given the states of its parent nodes A, depends on domain knowledge acquired from the empirical study (see Fig. 6) as well as the frequencies observed in both the measured variables and the simulated dataset. This approach ensures that a prior distribution is estimated for the model parameters and used alongside those learned from data, which helps to avoid incorrectly assigned probabilities when possible combinations are not observed in the training data (Gelman et al. 2014).
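One common way to combine a prior distribution with observed frequencies, as described above, is add-alpha (Dirichlet) smoothing of the CPT counts, so that parent-child combinations never seen in the training data still receive non-zero probability. The state names and data below are hypothetical, not the study's survey records.

```python
from collections import Counter

# Estimate P(child | parent) from observed (parent, child) pairs with a
# Dirichlet (add-alpha) prior. Data and state names are illustrative.
def estimate_cpt(pairs, parent_states, child_states, alpha=1.0):
    counts = Counter(pairs)
    cpt = {}
    for p in parent_states:
        denom = sum(counts[(p, c)] for c in child_states) + alpha * len(child_states)
        cpt[p] = {c: (counts[(p, c)] + alpha) / denom for c in child_states}
    return cpt

data = [("home", "accept"), ("home", "accept"), ("home", "reject"),
        ("public", "accept")]
cpt = estimate_cpt(data, ["home", "public", "corporate"], ["accept", "reject"])

# "corporate" was never observed, yet gets a uniform 0.5/0.5 from the prior
# instead of an undefined or zero probability.
print(cpt["corporate"])
```

With alpha set to 0 this reduces to the raw observed frequencies; larger alpha pulls the estimates toward uniform, which is the behavior the Gelman et al. (2014) recommendation guards sparse data with.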
As an illustration, we considered a simple scenario of inferring the likelihood that a user will welcome the automatic blocking of a third-party cookie. A reasonable deduction based on Table 9, for instance, is that there might be a 60% probability of a random user accepting automatic blocking of third-party cookies if the user is accessing the internet from home (cluster 3). However, that probability might increase to 100% if the user is a male on a university campus (cluster 1). A change in probability can also be inferred from observation of recent actions taken by the user on the web browser and/or from observations of user behavior in similar contexts, as shown in Fig. 6. For example, a user who previously enforced restrictive security/privacy options on their web browser is more likely to accept automatic blocking of cookies while completing an online form that requires sensitive information.
A prior probability can also be indicated for a user based on age and frequency of use of specific security features of the web browser. Consequently, qualitative inputs in terms of the variables and their dependencies are generated from domain knowledge and expert opinion. Quantitative data are subsequently generated using data analysis and model simulation.
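An update of the kind described above, where a prior belief is revised after observing a recent browser action, reduces to a single application of Bayes' rule. The numbers below are illustrative placeholders rather than values from Table 9 or Fig. 6.

```python
# Updating the belief that a user will accept auto-blocking of third-party
# cookies after observing a restrictive action on the browser.
# All probabilities are illustrative placeholders.
prior_accept = 0.6            # e.g. a home-context baseline
p_action_given_accept = 0.7   # accepters often show restrictive actions
p_action_given_reject = 0.2   # rejecters rarely do

# Total probability of the observed action, then Bayes' rule.
evidence = (prior_accept * p_action_given_accept
            + (1 - prior_accept) * p_action_given_reject)
posterior_accept = prior_accept * p_action_given_accept / evidence

print(round(posterior_accept, 2))  # 0.84
```

A single consistent observation thus raises the acceptance belief from 0.6 to 0.84; further observations would compound the update in the same way.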
To identify homogeneous groups in the data set, Two-Step clustering was adopted, allowing us to automatically determine the optimal number of clusters. Respondents were first clustered with k-means on their factor scores for the three acceptability variables determined from the PLS-SEM model (VFP, PU, and BI). The results show that the majority of our participants view PAC favorably (Fig. 5). The acceptability cluster membership was then combined with other adaptive cybersecurity personalization variables (such as context/environment, gender, and age) for the Two-Step clustering, which was evaluated on self-reported previous use of cybersecurity tools (ACB) and PEOU. The results are summarized in Table 9 and visualized in Fig. 6.
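The first clustering step can be sketched with a minimal k-means implementation over simulated factor scores. The data, seed, and cluster count here are assumptions for illustration, not the study's dataset or its statistical package.

```python
import numpy as np

# Minimal k-means (Lloyd's algorithm) over hypothetical factor scores for the
# three acceptability variables (VFP, PU, BI). The data are simulated.
rng = np.random.default_rng(0)
scores = np.vstack([rng.normal(loc, 0.1, size=(20, 3))
                    for loc in (0.2, 0.5, 0.8)])  # three latent groups

def kmeans(X, k, iters=50):
    """Assign each respondent to the nearest of k cluster centers."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Squared Euclidean distance of every point to every center.
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        # Recompute centers; keep the old center if a cluster empties out.
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(scores, k=3)
print(np.bincount(labels, minlength=3))  # respondents per acceptability cluster
```

The resulting cluster memberships would then join the demographic and context variables as inputs to the second (Two-Step) stage.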
The joint probabilities are then used to specify the CPTs. To make a prediction, the BN propagates the information available at any given instance, based on its structure and prior/conditional probabilities, and provides the posterior probabilities associated with the acceptability status (high or low) for a particular cybersecurity task to be adapted to the user's preference. Consequently, the BN-based decision engine takes the output probabilities from both the context and user models as causal factors, together with the web browser configuration log and the security task models, to make a prediction. A decision status (e.g. block cookies, send an alert or not) with an associated probability is arrived at after information is propagated in the BN. If the "acceptability" and "security need" probabilities are higher than a preset threshold, automated security assistance (in this scenario, auto-blocking third-party cookies or a preferred form of user alert) is provided for the user (see Fig. 7). Based on an evaluation of the user's level of satisfaction with the automated assistance provided, the user preference model is updated accordingly. Figure 7 illustrates a limited memory influence diagram (LMID) for a personalized adaptive cybersecurity task, built using domain knowledge together with records from the survey data analysis.
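The threshold rule described above can be sketched as a small decision function. The function name, threshold value, and decision labels are hypothetical; in the framework these probabilities would come from propagating evidence through the BN.

```python
# Automate the security task only when both propagated probabilities clear a
# preset threshold; otherwise fall back to asking the user. Names and numbers
# are illustrative, not the deployed decision engine.
def decide(p_acceptability, p_security_need, threshold=0.7):
    if p_acceptability > threshold and p_security_need > threshold:
        return "auto-block third-party cookies"
    return "ask the user"  # fall back to an alert / explicit prompt

print(decide(0.84, 0.91))  # both above threshold -> automate
print(decide(0.84, 0.40))  # security need too low -> ask the user
```

The threshold itself is a tunable trade-off: raising it makes automation more conservative, deferring more decisions to explicit user prompts.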
Evaluation begins with the BN built from the proposed LMID, which we refer to as the base BN. Next, data analysis is used to populate the CPT of the base BN, which is then used to generate a simulated data set. The Learning wizard in the Hugin software (Madsen et al. 2005) is then used to automatically discover a new network, called the intermediate BN, from the simulated dataset (Fig. 8). Prior domain knowledge is then applied to resolve any uncertainties present in the intermediate BN structure. With the discovered network and the generated database, parameter learning is carried out to specify a new CPT for the resulting network, called the learned BN (Fig. 8). Finally, the performance results for the originally proposed BN structure are compared with the corresponding BNs automatically discovered from both the survey and the simulated datasets. The comparison evaluates the model's ability to produce applicable explanations in which the relationships reflect adaptive cybersecurity as the domain from which the data were generated (Shaughnessy and Livingston 2005). For prediction accuracy, we consider real usage scenarios in determining whether or not the predicted levels of acceptability are plausible. A receiver operating characteristic (ROC) curve is a fundamental measure of a model's performance in predicting specific states, and the area under the ROC curve (AUC) allows the quality of the model to be expressed as a single value (Fig. 9). The analysis shows how well the predictions of the built BNs match the cases in the dataset. On the whole, the probability changes among the specified scenarios for the proposed BN parameters were similar to those obtained by the learned BN.
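The AUC can be computed directly from its rank interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, with ties counting half. The scores and labels below are hypothetical model outputs, not the study's evaluation data.

```python
# AUC as the pairwise ranking probability (equivalent to the area under the
# ROC curve). Scores and labels are hypothetical model outputs.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count positive-over-negative wins, with ties worth half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # predicted acceptability
labels = [1,   1,   0,   1,   0,   0]    # observed high/low acceptability
print(auc(scores, labels))
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to a perfect separation of high- and low-acceptability cases, which is what makes it a convenient single-number summary for comparing the base and learned BNs.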

Discussion and conclusions
This research has two goals. The first is to conduct an empirical study using a behavioral science approach to determine the factors influencing users' cybersecurity behavioral decisions. The second is to illustrate how Bayesian networks can be built by integrating findings from empirical studies into the ML approach to user and system modeling. To this end, a cybersecurity behavioral model was first introduced and empirically tested in this paper. The effects of five attitudinal constructs on cybersecurity behavioral intentions and behavior were examined; in doing so, we (1) augmented the original TAM with additional dimensions (Perceived Risk, Value for Personalization, and Attitude towards Personal Data), and (2) evaluated the influence of three sample demographic variables on cybersecurity behavioral intentions. Although not all the hypothesized paths were found to be statistically significant, some interesting findings resulted from this study. The results suggest that both security-related perceptions and general external factors contribute to individual cybersecurity adoptive behavior. The results also provide some evidence that these factors are moderated by the user's gender, age, and the environment within which the internet is mostly accessed. Following the testing and verification of the behavioral model, those empirical findings were combined with the ML technique of Bayesian-network modeling to develop a personalized adaptive cybersecurity framework. The research illustrated the model framework for personalized adaptive cybersecurity assistance.
The proposed behavioral model successfully explained most of the variance in the dataset. Similar to earlier studies (Alharbi and Drew 2014; Venkatesh and Davis 2000), TAM proved to be a useful theoretical framework to explore and explain the factors influencing individuals' behavioral intentions towards technological innovations. Although the study confirmed the direct and indirect effects of some of the TAM constructs on cybersecurity behavior, some of our results are inconsistent with prior research findings and warrant further discussion. The results support prior empirical work that found a relationship between perceived ease of use, usefulness, and behavioral intentions towards technological innovations (e.g. Lee 2009; Yiu et al. 2007). However, contrary to suggestions from most prior studies that perceived usefulness is the main determinant of usage intentions in other IS research contexts (e.g. Davis 1989; Gefen et al. 2003; Jeyaraj et al. 2006), our results show that perceived ease of use has the greater influence in predicting behavioral intentions in the context of cybersecurity.
However, our results are consistent with some previous studies that applied the TAM to online applications and found a strong effect of perceived ease of use on usage intentions and behavior (e.g. Castaneda et al. 2009; Gefen and Straub 2000; Mun and Hwang 2003; Özkan et al. 2010). The original TAM theorizes that PU has a direct effect on behavioral intention while PEOU influences intention only indirectly through PU, hence depicting PEOU as a weak predictor of usage intentions. Nevertheless, our model supports a direct effect of PEOU on behavioral intentions and usage of cybersecurity, and points to a greater significance of the ease-of-use factor in the context of digital security. A possible explanation of this finding is the assertion that the effect of PEOU depends upon whether the type of use is intrinsic or extrinsic to the technology (Gefen and Straub 2000). As our PEOU measured how easy participants found it to learn and configure the security settings of their preferred web browser, the tasks involved here are intrinsic, in that cybersecurity itself is an integrated component of the web browser, with an interface that delivers the desired security and privacy control. Although our model did not support the influence of PEOU on PU as theorized in the original TAM, PU did have a substantial impact on behavioral intention, which is consistent with extant findings in the TAM literature. The results confirm the direct relationship between PU and behavioral intention, though PEOU did not have a significant effect on PU, and the proportion of the BI variance accounted for by PEOU far outweighed that of PU in our cybersecurity behavioral model. Furthermore, PEOU is a significant determinant of self-reported actual cybersecurity practice in this study, while PU is not. Therefore, PEOU provides considerable explanatory power in the context of cybersecurity usage among home computer users.
Another major conclusion from this study that differs from classical TAM-related studies is the role of behavioral intention. Based on findings from previous behavioral models, we had originally hypothesized that behavioral intentions would predict actual self-reported adoption of cybersecurity mechanisms. However, contrary to what is suggested by the extant literature, our dataset did not support this hypothesis, and upon further review we realized this finding is reasonable in our specific research context. This is because our behavioral intention construct focused on personalized adaptive cybersecurity (PAC) rather than general cybersecurity, and hence participants may not yet have been exposed to it. Moreover, in the context of cybersecurity, it is logical to expect the inherent inexplicability of security to impede actual usage even though users may have intended to adopt available countermeasures. Thus factors such as complexity, inexperience, and the secondary nature of security configuration to web browsing in general tend to deter adoption and usage of cybersecurity tools. Our findings also highlight the moderating role of gender: the effect of BI on actual self-reported usage was significant for males but not for females, although the relationship was negative. Moreover, the effect of PEOU on BI was much stronger for the female subgroup, indicating that female netizens may be more hesitant to adopt difficult-to-use cybersecurity controls.
The results also suggest that the strongest predictor of self-reported actual usage of cybersecurity controls is the second-order construct of attitude towards personal data. Thus, participants who showed higher concern for the collection and use of their personal data were more likely to have attempted to adopt, or to have actually adopted, a cybersecurity countermeasure to ensure their privacy/security online. Interestingly, the relationship between the APD construct and BI to adopt personalized adaptive cybersecurity was negative, indicating that users who are very privacy conscious are less likely to adopt cybersecurity mechanisms that rely on their personal data to provide personalization. The relevance of the proposed BN framework is clearly supported by these findings. The BN-based models complement available machine-generated data (e.g. location, time, web logs) with domain knowledge data for the design of an intelligent cybersecurity mechanism. This minimizes the need to actively mine personal data to support the prediction of acceptance of an intended security task to be automated. The BN can also learn from real usage data to automatically update the probabilities when the inherent adaptability function is executed in practice. Users will be more satisfied if the automated cybersecurity assistance provided is relevant to their primary cyber goals and delivered in a manner acceptable to them, based on the appropriate factors influencing their personal preferences. This requires a complex decision-making process involving predictive analysis of system and usage behavior, with a host of uncertainties. Building the predictive model with a BN, which has an inherent facility for handling uncertainty, will ensure a more effective provision of automated assistance that meets differing users' preferences than random automation of security tasks would.

Implications for theory and practice
This study has implications for both researchers and practitioners of cybersecurity. From a research perspective, the extension of the TAM explained a significant amount of the variance in behavioral intention and adoption of web browser security controls. The study validates the significant role of user perceptions of ease of use, usefulness, risk, and personalization in predicting individuals' intention to adopt PAC to achieve their security and privacy goals while accessing resources in the cyber world with their web browsers. As discussed, the ease-of-use factor, which is known to have a weaker influence in the classic TAM literature, takes on a much more significant role when it comes to cybersecurity control usage and intentions. This implies that individuals who normally disregard cybersecurity countermeasures may form the intention to adopt PAC if they realize it will be useful and easy to do so. The study introduced additional constructs from protection motivation theory and personal data research that better reflect the complex context of cybersecurity, which encompasses digital security and privacy in its entirety. The findings from the PLS-SEM generally support the importance of the additional constructs, especially attitude towards personal data, in predicting adoption behavior in the domain of cybersecurity. Consequently, the findings from the empirical behavioral study provide theoretical contributions in the area of cybersecurity acceptance and usage, with respect to both the re-validation and the extension of an established theoretical framework as applied to the new context of security behavior modeling. The findings from this research therefore add substantially to our understanding of cybersecurity behavioral intentions and personalization dimensions.
The findings also have implications for practice and design, as they can inform several aspects of improving the usability of cybersecurity mechanisms. This study suggests that cybersecurity mechanisms targeted at HCUs need to be highly usable, with minimal demand on cognitive resources. The study also endorses the value of incorporating data and privacy protection into system design from the outset, which is the underlying principle of recent privacy-by-design projects; for instance, both the EU GDPR and the PRIPARE project (Notario et al. 2015; Huth 2017) highlight the need for privacy by design. However, almost no direct comprehensive studies exist on non-expert users' privacy preferences towards adaptive cybersecurity in non-corporate environments. Our proposed predictive model for providing personalization takes individuals' disposition towards their personal data into account, providing a framework for incorporating data privacy controls from the design stage. In so doing, personalization is provided at the preferred level for each individual. Thus, our design framework will facilitate the process of determining, and limiting access to, data that a user might consider too sensitive for use in providing adaptive cybersecurity.
In summary, the contributions of the research presented in this paper are both novel and significant, paving the way for further empirical study of personalized adaptive cybersecurity in the public domain. As research exploring the provision of PAC for HCUs is still in its infancy, the issues discussed in this paper fill a fundamental gap in the current literature. The empirical approach of PLS-SEM was used to explore the statistical relationships between various cybersecurity behavioral input variables in order to predict two output variables (BI and ACB). This provided essential insights into the specific issue of predicting user behavioral intentions toward the provision of PAC assistance. An example BN-based framework was developed to illustrate how these insights can be incorporated into building PAC user models. The BN is thus built using a range of diverse input variables, including behavioral, context, and simulated web browser features, to demonstrate the provision of PAC in web browsers.

Limitations and future research
It is important to highlight the limitations of the studies presented in this paper. Notably, generalization must be done with caution, since university students and staff were used as a convenience sample. The data set has nevertheless been used successfully to provide empirical evidence for the usefulness of predictive analysis of users' behavioral data in the design of adaptive cybersecurity. Although we had some measured data (such as self-efficacy), observational data such as the actual level of users' cybersecurity expertise and the security state of their browsers were not available during the development of the BN-based models. The data used for the BNs were thus obtained by simulating aspects of the proposed framework, so the possibility that the results carry some bias cannot be overlooked. Nevertheless, the simulated data provided a good indication of the likely percentage changes, helping to determine the underlying trends that may be present in real-world data. Moreover, the primary goal of this work is to demonstrate the incorporation of insights gained from behavioral empirical studies into training machine learning models that can better support prediction and decision-making in the domain of cybersecurity. This work represents a first step towards the design and development of a user-friendly adaptive cybersecurity system that adheres to the concept of privacy by design. We also recognize that further research is needed to fully evaluate the proposed BN-based models; this will require additional datasets and further optimization and testing before implementation. Although the preliminary results using simulated data are promising, no real trial data were available for full validation. However, since our goal is to illustrate the feasibility of the approach rather than to validate it, we evaluated the model on prediction accuracy. Thus, considering real usage scenarios, we are able to determine the underlying trends in predicting acceptability and usability with the set of parameters identified.
Continuing with our combined approach of empirical studies and modeling techniques, we have identified three future research directions. First, broader samples are required to replicate the behavioral model and validate any inferences that can be made based on either PLS or covariance-based SEM results. Second, other factors that influence cybersecurity personalization need to be considered, and their appropriate measures determined, so they can be incorporated into the Bayesian network system. Finally, the Bayesian-based models need to be implemented in a prototype web browser for practical evaluation of their function and for further optimization with real browser session logs and sensory data.

Individual Differences-Descriptive Characteristics
SE and SBCL are PMT constructs used to examine the mediating effects of participants' protection motivation on cybersecurity behaviors. The questions here examine users' level of experience with their preferred web browser as well as their exposure to web browser security issues and their protection motivation levels (Chang 2004; Ng and Rahim 2005). SE items are adapted from the instrument developed and empirically validated by Compeau et al. (1999), while SBCL items are adapted from Herath and Rao (2009).
Self-Efficacy (SE)-I could optimize my web browser security settings . . .
SE_1: . . . if I had only the web browser manuals for reference.
SE_2: . . . if I had seen someone else doing it before trying it myself (Reverse Coded)
SE_3: . . . if there was no one around to tell me what to do as I go

Security Breach Concern Level (SBCL)
SBCL_1: Cybersecurity issues affect me directly
SBCL_2: Cybersecurity threats are exaggerated (Reverse Coded)
SBCL_3: I think cybersecurity issues should be taken seriously
SBCL_4: Security breaches are only targeted at organizations (Reverse Coded)

System Characteristics (SC)-SC assesses participants' views on the user-friendliness of their preferred web browser and is measured using items from Thong et al. (2002, 2004). The construct is used to elicit individual preferences in terms of the Design, Terminology/Language, and Navigation of the browser security interface and user interactions, with the following items:
IC_1: I understand the terms used on my preferred browser security interfaces
IC_2: Layout of the browser security interface is clear and consistent
IC_3: The sequence of screens for security settings is difficult to navigate (Reverse Coded)
IC_4: Security functions are well depicted by buttons and symbols

Part 2 (A)-User Perceptions (TAM & PMT)
Perceived Ease of Use (PEOU)-is "the degree to which an individual believes that using a particular system would be free of physical and mental effort" (Davis 1989). Likert-type statements were adapted from previously validated measurement inventories of TAM variables and rephrased for web browser security settings (Davis et al. 1989; Lu et al. 2014; Thong et al. 2002; Venkatesh and Davis 2000).
PEOU_1: Learning to configure browser security settings is easy for me
PEOU_2: Interacting with the interface for web browser security settings does not require a lot of my mental effort
PEOU_3: My interaction with web browser security settings is clear and understandable
PEOU_4: I find it easy to optimize my web browser security to the level of protection I want for my computer and privacy

Perceived Usefulness (PU)-also adapted from TAM's scale items, is the degree to which a person believes web browser security settings would improve their protection against cyber-attacks (Davis 1989).
PU_1: Web browser security functionalities give me greater control over my safety and privacy online
PU_2: Overall, I find browser security settings useful in protecting my computer from cyber attacks
PU_3: Optimising my browser security settings gives me peace of mind when I am working with the internet
PU_4: The sensitive nature of information I search for and/or store on my personal computer requires me to optimize my web browser security settings

Perceived Risk (PR)-Questionnaire items for perceived risk were adapted from Lu et al. (2005). Their research findings indicate that perceived risk indirectly impacts intentions to use an online application under security threats.
PR_1: Security functionalities embedded in web browsers are not adequate for preventing cyber attacks
PR_2: It is important to optimize browser security when visiting sites that require data input
PR_3: I can make mistakes while configuring my browser settings which can cause damage to my computer

Value for Personalization (VFP)-in this study, VFP refers to the level of appreciation a user has for all types of personalization possibilities within cyberspace. Items were adapted from the value-of-online-personalization scale developed and validated by Chellappa and Sin (2005).
VFP_1: I value online applications that are personalized based on information that is collected automatically (such as IP address, pages viewed, access time) but cannot identify me as an individual.
VFP_2: I value products and services that are personalized based on information that I have voluntarily given out (such as age range, salary range, Zip Code) but cannot identify me as an individual.
VFP_3: I value application interfaces that are personalized for the device (e.g. desktop, mobile phone, tablet), browser (e.g. Internet Explorer, Chrome, Firefox) and operating system (e.g. Windows, Unix) that I use.

Part 2 (B)-Attitude to Personal Data (APD)
To minimize survey fatigue, the APD scale adopted from Addae et al. (2017) is simplified based on the overall cluster membership predictor importance of the APD factors, as well as the reliability scores of the measured items.

Privacy Concern
PRI_1: I am sensitive about giving out information regarding my preferences
PRI_2: I am concerned about anonymous information (information collected automatically but that cannot be used to identify me, such as my computer, network information, operating system, etc.) that is collected about me
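The item-reliability scoring mentioned above is commonly computed as Cronbach's alpha. The following is a minimal sketch, not the paper's analysis pipeline; the two-item scale and the 5-point responses are invented for illustration:

```python
# Minimal sketch of Cronbach's alpha for a set of Likert-scale items.
# The PRI responses below are hypothetical, for illustration only.
def cronbach_alpha(items):
    """items: list of equal-length response lists, one per scale item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Hypothetical 5-point responses to PRI_1 and PRI_2 from five participants
pri = [[5, 4, 4, 5, 3],
       [4, 4, 5, 5, 3]]
alpha = cronbach_alpha(pri)  # ≈ 0.783 for these invented data
```

Items whose removal raises alpha are candidates for dropping when a scale is simplified.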

Part 3-Cybersecurity Behavioral Intentions
Personalized Cybersecurity Adoption Intention (BI): Items used to examine participants' general attitude to personalized adaptive web browser security are adapted from Lu et al. (2014) and Ng and Rahim (2005).

BI_1: I am likely to accept personalized browser security update notifications
BI_2: It is possible that I will allow adjustments to my web browser security settings to improve my safety online
BI_3: I am certain that I will pay attention to cybersecurity alerts tailored to my personal preference

Actual Cybersecurity Behavior (ACB): Items determining user interaction with web browser security settings were selected and adapted from the list of strategies people adopt to protect themselves online identified by Rainie et al. (2013).

ACB_1: I have used a service that allows me to browse the web anonymously
ACB_2: I don't set my browser to disable or turn off cookies (Reverse Coded)
ACB_3: I regularly clear cookies and browser history while I use the internet
ACB_4: I sometimes encrypt my communications while using the internet
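Items marked (Reverse Coded), such as ACB_2, are flipped before scores are aggregated so that higher values consistently indicate more protective behavior. A minimal sketch on a 5-point Likert scale (the response data are illustrative, not from the study):

```python
# Reverse-coding a 5-point Likert item: 1 -> 5, 2 -> 4, ..., 5 -> 1.
SCALE_MIN, SCALE_MAX = 1, 5

def reverse_code(score):
    return SCALE_MAX + SCALE_MIN - score

# Hypothetical raw responses to ACB_2 ("I don't set my browser to
# disable or turn off cookies"); agreement here means LESS protection,
# so the item is flipped before it joins the ACB composite score.
acb_2_raw = [1, 2, 5, 4, 3]
acb_2 = [reverse_code(s) for s in acb_2_raw]  # -> [5, 4, 1, 2, 3]
```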

Part 4-Components of personalization
Items were adapted from (Xu et al. 2008) to acquire participants' ratings of the personalization dimensions identified for the purposes of building a BN-based model for adaptive cybersecurity.
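One way such ratings and behavioral variables feed a BN is through maximum-likelihood estimation of conditional probability tables from discretized records. The sketch below uses invented variable names and counts purely to illustrate the estimation step; it is not the model of Fig. 8:

```python
# Maximum-likelihood CPT estimation P(child | parent) from discretized
# records. Variable names and data are hypothetical illustrations.
from collections import Counter

# (parent state, child state) pairs, e.g. privacy-concern level vs.
# whether personalized security assistance was accepted.
records = [
    ("high", "yes"), ("high", "no"), ("high", "yes"),
    ("low", "yes"), ("low", "yes"), ("low", "yes"),
]

pair_counts = Counter(records)
parent_counts = Counter(parent for parent, _ in records)

cpt = {
    (parent, child): count / parent_counts[parent]
    for (parent, child), count in pair_counts.items()
}
# e.g. cpt[("high", "yes")] == 2/3 for these invented records
```

In practice a BN library would perform this estimation (with smoothing) for every node given its parents.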

Fig. 5 Cluster groups based on acceptability factors

Fig. 7 The qualitative representation of the LMID used for decision making in PAC with priors based on data analysis

Fig. 8 The intermediate structure and CPT estimates for the Learned BN

Protection
PDP_1: I regularly look out for new policies on personal data protection
PDP_2: I consider the privacy policy of institutions where I give out such personal details
PDP_3: I don't always optimize my privacy settings when I create an online profile (Reverse Coded)

Awareness
PDA_1: Such details about me are of value to external organizations
PDA_2: Researchers don't need my consent to access my personal details (Reverse Coded)
PDA_3: Data collection organizations need to disclose the way the data are collected, processed, and used
User Interface
1. Please indicate the importance of the following user interface characteristics to be considered in personalizing your web browser security and privacy settings.
a Language
b Presentation style (popup, icon change, etc.)
c Navigation style (buttons, drop-down, etc.)
d Level of information (detailed vs. simplified)
e Others (please specify)

Adaptive Cybersecurity
2. Please indicate the importance of the following characteristics of adaptive cybersecurity to be considered in personalizing your web browser security and privacy settings.
a User effort required
b Benefit of the security configuration
c Cost of the automated configuration
d Others (please specify)

Context
3. Please indicate the importance of the following contextual factors, which should be taken into consideration in personalizing your web browser security and privacy settings.

4. Please indicate the importance of the following user actions, which should be taken into consideration in personalizing your web browser security and privacy settings.
a Active browsing session
b Browser history
c Explicit security/privacy queries
d Previous acceptance of personalized cybersecurity
e Others (please specify)

Table 6 Results of OTG for age and environment

Table 9 Cluster distribution of respondents showing cluster centers sorted by overall cluster membership predictor importance