Anger and hostility: are they different? An analytical exploration of facial-expressive differences, and physiological and facial-emotional responses

ABSTRACT Previous research has proposed the exploratory hypotheses that hostility could differ from anger in that it involves a higher possibility of inflicting physical harm, while anger could involve higher frustration and stress compared to hostility. Based on these hypotheses, we tested whether there are expressive differences and discrete emotional responses between angry and hostile faces. We used participant assessment to preselect faces. Using action unit analysis, we found that faces labelled as angry and hostile revealed differences in expressive characteristics and that hostile faces were, as predicted, rated higher by the participants for the intent to inflict physical harm. Subsequently, we presented these faces, as well as fearful, sad and neutral faces, overtly and using masking, and measured skin-conductance, heart-rate and facial-emotional responses. We found that in both conditions faces expressing hostility led to higher physiological arousal. Detection of a face was a necessary condition for physiological responses to angry and hostile expressions when faces were presented using masking. We found that during overt presentations, hostility elicited fearful facial-emotional responses while anger elicited mirroring responses. Our findings suggest that hostility is a fear-eliciting emotion related to anger with distinguishable expressive characteristics.

Anger and hostility are two widely studied concepts in psychology (Birkley & Eckhardt, 2015). Anger is one of the six basic emotions that characterise human facial expressions (Ekman, 1992). Hostility is considered an "emotional state" (Averill, 2012; Fernandez, Day, & Boyle, 2015) and has been mentioned as a concept in multiple and diverse studies ranging from international politics to forensic and medical research (Lin et al., 2015). It is not surprising, therefore, that anger and hostility have been approached using a variety of different perspectives. They have been explored in relation to whether they portray the intent for physical harm or verbal aggression (Yudofsky, Silver, Jackson, Endicott, & Williams, 1986). They have been addressed as social norms, such as situationally expected responses (Averill, 2012); as personality attitudes, such as instrumental-premeditated and impulsive-affective traits; and as evolutionary concepts, such as defence and predatory mechanisms (Weinshenker & Siegel, 2002).
Anger is an emotion that has been described as state-anger, the transient experience of irritation, stress and frustration in response to an emotional elicitor, and as trait-anger, a more constant personality temperament that makes an individual experience more frequent and more intense state-anger responses even to innocuous and unprovocative cues (Ramirez & Andreu, 2006). Hostility, on the other hand, is considered an emotional state or emotional expression that communicates the intention to overtly or covertly harm an individual and includes aggressive motor responses, as well as expressive characteristics that indicate potential intent for physical aggression and assault, such as physically attacking an individual (Deffenbacher, 2000). Due to the intent to physically harm an individual portrayed in hostile expressions, faces expressing hostility could potentially elicit higher peripheral nervous system arousal (see Supplemental Material 5.1) and different facial-emotional responses (Houston & Stanford, 2005) compared to faces expressing anger (Fernandez et al., 2015).
Despite the multi-faceted aspects of anger and hostility-related models in psychological theory, the neural correlates (Brooks et al., 2012; Fusar-Poli et al., 2009) and the physiological correlates (Ottaviani et al., 2016; van der Ploeg, Brosschot, Versluis, & Verkuil, 2017) that characterise the two expressions have been researched under the umbrella term of anger, and previous psychological research has not attempted to empirically explore possible differences between anger and hostility (see Supplemental Material 5.2). For example, studies assessing responses to emotional faces (Banks, Bellerose, Douglas, & Jones-Gotman, 2012) have focused on the exploration of skin conductance and heart-rate responses to anger as a basic emotion (Ekman, 1992). In the same manner, studies relating to masked presentations and physiological responses (van der Ploeg et al., 2017) have included prototypical angry faces (Bornemann, Winkielman, & van der Meer, 2012; Chatelain & Gendolla, 2015) from a variety of pre-existing datasets (Axelrod, Bar, & Rees, 2015) but have not explored physiological and facial-emotional responses to perceived hostility. Furthermore, anger and hostility are often used interchangeably in topical research, and previous conceptual approaches have not attempted to provide a comprehensive theoretical framework for exploring a possible distinction between anger and hostility as emotional expressions (Fernandez et al., 2015).
A distinguishing theoretical account and empirical definition that could clarify whether and why these facial expressions should be addressed as separate concepts is missing (Lemerise & Dodge, 2008). The possibility that hostility and anger could elicit different responses is also not addressed in previous research, and an explicit account of the contexts in which these expressions manifest, and of the social dynamics, behavioural motivations, and emotional and cognitive processes that could underlie them, has not been provided by previous psychological studies (Eckhardt et al., 2004; Fernandez et al., 2015; see also Supplementary Material 5.3).
This omission could also be problematic for theories relating to biological preparedness. Biological preparedness refers, in this context, to central and peripheral nervous system responses to the perception of danger (Brooks et al., 2012). For example, several relevant studies explore whether we can experience automatic and involuntary arousal in response to environmental signals of danger using angry faces (van der Ploeg et al., 2017). In these studies, skin conductance responses (SCR), subcutaneous sweating, and changes in heart rate (HR) are most commonly assessed because these are reliable measures of peripheral nervous system arousal and of automatic and involuntary processing of danger (see also Supplemental Material 5.1). It is unclear, nonetheless, whether anger is an appropriate stimulus for eliciting responses to danger (Hess & Fischer, 2013) or whether evolutionary-biological preparedness should be explored using stimuli that could indicate higher expressive intent for physical harm, such as faces labelled as hostile (Averill, 2012). It is also unclear whether the current novel attempt to distinguish between anger and hostility can be extended to responses to masked faces, such as subliminally presented faces expressing anger and hostility, and whether hostility can be processed without awareness due to its biological importance. Finally, if hostility differs from anger in that it portrays expressive characteristics indicating the intent to inflict physical harm, empirical evidence for distinguishing characteristics between these two expressions could function as a starting point for developing response strategies that could inform and improve the professional practice of police, orderly and medical personnel (Novaco, 1994; Rippon, 2000).
The exploratory hypothesis that guided the current research was that hostile expressions could indicate higher intent for physical harm compared to angry expressions. To explore this hypothesis, we tested, in the current studies, whether angry and hostile expressions elicit differential patterns of skin conductance and heart-rate responses, as well as whether they are associated with different facial-emotional characteristics and elicit different facial-emotional responses. Due to the lack of a dataset including separate labels for angry and hostile expressions, we selected and assessed, using participant testing and facial recognition software analysis, angry and hostile faces from an existing database in which faces were labelled as angry (Gur et al., 2002). Subsequently, we presented these faces, as well as fearful, sad and neutral faces, overtly for one second and for brief durations (34.72 ms) with backward masking to a black-and-white pattern (125 ms), and measured skin conductance, heart rate and facial-emotional changes. The aim of the current studies was to explore possible differences between anger and hostility and also to test with rigorous methodological and statistical criteria whether hostile faces can be processed without awareness.

Phase one
Aims: The aims of this phase of stage one were to preselect faces expressing anger or hostility from an existing dataset and assess with automatic facial recognition software whether these faces display differences in action units for the expression of emotion. Our hypothesis for this phase was that due to the possibility that hostile faces indicate higher intent for physical harm we would be able to record distinct expressive differences between faces labelled as angry and faces labelled as hostile.
Participants: A power calculation based on medium effect sizes (partial eta-squared = .06; f = .25) and within-subject experimental trial repetitions was performed. The result revealed that twenty-five participants would be required for P (1 − β) ≥ .8 (Faul, Erdfelder, Buchner, & Lang, 2009). Twenty-seven volunteers participated in this phase. The exclusion criteria were current or previous DSM Axis I or II diagnosis (Dalili, Penton-Voak, Harmer, & Munafò, 2015), having a criminal record (Roberton, Daffern, & Bucks, 2015) and suffering or having suffered abuse in their personal life (Pollak, Cicchetti, Hornung, & Reed, 2000), all through self-report. The participants were screened with the Somatic and Psychological Health Report Questionnaire (Berryman, McAuley, & Moseley, 2012), an on-line alexithymia questionnaire (Alexithymia, 2018) and the Aggression Questionnaire (McKay, Perry, & Harvey, 2016). One participant was excluded from the analysis due to possible alexithymic traits. The final population sample consisted of twenty-six participants (thirteen female) with mean age 31.08 (SD = 7.19). The experiment was approved by the Ethics Committee of the School of Psychology of the University of Nottingham.
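The a-priori power computation described above was performed in G*Power (Faul et al., 2009); it can be sketched numerically with the noncentral F distribution. This is an illustrative reconstruction, not the authors' script: the number of repeated measurements `m` and the repeated-measures correlation `rho` are assumed values, and the sketch follows G*Power's repeated-measures convention for the noncentrality parameter, λ = f²·n·m/(1 − ρ).

```python
from scipy.stats import f as f_dist, ncf

def rm_anova_power(f_effect, n, m, rho=0.5, alpha=0.05):
    """Approximate power for a within-subjects (repeated-measures) F test.

    Follows the G*Power convention: noncentrality lambda = f^2 * n * m / (1 - rho),
    with df1 = m - 1 and df2 = (n - 1) * (m - 1). `m` (number of repeated
    measurements) and `rho` (assumed correlation) are illustrative assumptions.
    """
    lam = f_effect ** 2 * n * m / (1 - rho)
    df1 = m - 1
    df2 = (n - 1) * (m - 1)
    # critical F under the null, then power = P(F_nc > F_crit)
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return float(1 - ncf.cdf(f_crit, df1, df2, lam))
```

Solving for the smallest `n` with power at or above .8 reproduces the kind of sample-size target reported above, given matching assumptions about `m` and `rho`.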
Facial stimuli: The facial stimuli used were taken from the dataset created by Gur et al. (2002; see also Supplemental Material 3). All the stimuli were adjusted for interpupillary distance, transformed to grey scale and resized to a standard 1024 × 768 pixels resolution. Their luminance was averaged using the SHINE toolbox for MATLAB, and finally the faces were spatially aligned and framed in pure white within a cropped circle (Height: 6 cm; Width: 4 cm).
Participant assessment: All stimuli were presented on a high frequency LED monitor set at 144 Hz (6.94 ms) and the presentation was created in the Builder and Coder components of PsychoPy v.1.90.02 (Peirce, 2007). Two hundred faces labelled as angry, from fifty actors (twenty-five female), were presented. The session started with a training stage during which participants familiarised themselves with the keyboard and mouse response components of the experiment. The main experiment started with a fixation cross for two seconds (± one second). After the fixation cross, in random order, a single face was presented at fixation for one second followed by a black-and-white pattern mask for one second. A blank screen interval was then presented for two seconds. After the interval participants were asked "What label best describes the presented face?". They were asked to choose from an on-screen list using the keyboard. The options included (a) angry, (s) hostile, (d) both and (f) none; the key assignment and the order of the list were randomised in each trial and the participants were briefed during the training session that they could base their responses on subjective emotional criteria. After the labelling response participants were asked to use the mouse to rate from one (extremely low) to ten (extremely high) the intensity of the emotional expression and press OK to confirm their choice. Following this response, the participants were asked to use the mouse to rate from one (extremely low) to ten (extremely high) the emotional expression on four Likert scales, each presented in one quadrant of the screen. The emotional expression was rated for frustration, stress, anger and hostility (Ramirez & Andreu, 2006); the assignment of scales to quadrants was randomised on each trial. Participants were asked to press OK to confirm their choice. A three-second blank screen was presented before the next trial.
Stimulus preselection: All faces which participants rated with 100% agreement as expressing anger or hostility were initially selected. This set included forty-one faces from twenty-seven actors. These faces were further selected to avoid actor repetition and to include an equal number of male and female actors. To this end, ten faces showing anger and ten faces showing hostility from twenty different actors (including five different female and five different male actors for each category) were chosen using a pseudo-randomisation function implemented in Python.
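The pseudo-randomised selection step can be illustrated with a short Python sketch. The record format (`actor`, `gender`, `label`) and the function name are hypothetical; the original selection script is not reproduced here.

```python
import random

def select_balanced(faces, per_category=10, seed=0):
    """Pseudo-randomly pick `per_category` faces per label (anger / hostility),
    half female and half male, with no actor appearing twice overall.

    `faces` is a list of dicts with keys: actor, gender, label
    (a hypothetical record format for illustration).
    """
    rng = random.Random(seed)  # fixed seed makes the pseudo-randomisation reproducible
    shuffled = faces[:]
    rng.shuffle(shuffled)
    # quota of per_category // 2 faces for each (label, gender) cell
    quota = {(lab, g): per_category // 2
             for lab in ("anger", "hostility") for g in ("female", "male")}
    used_actors, chosen = set(), []
    for face in shuffled:
        key = (face["label"], face["gender"])
        if quota.get(key, 0) > 0 and face["actor"] not in used_actors:
            chosen.append(face)
            used_actors.add(face["actor"])
            quota[key] -= 1
    return chosen
```

With the defaults this returns twenty faces: five female and five male actors per category, and no actor repeated across the angry and hostile subsets.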
Facial recognition software: Computer-based analysis of the resulting pool of images was conducted using Noldus FaceReader 7.1. The analysis employed the Viola-Jones cascaded algorithm and an active appearance model (AAM) to eliminate static identification variability. The analysis included the in-built emotional categorisation labels included in Noldus (anger, fear, surprise, happiness, sadness and neutral), an assessment of whether facial action units were innervated and a percentage metric for the identification of facial action unit innervation that indicated how pronounced the specific action unit was in the assessed face.
Results and Discussion: The analysis was performed using frequentist and Bayesian statistics. For every non-significant finding, a Bayes factor was calculated using the Dienes Calculator. Sensitivity for the null was defined at B ≤ .33, given that the mean for hostility, for any reported assessment, was within ± one standard deviation of the mean of the expression it was compared against (Dienes, 2016). The final subset of faces did not reveal significant differences in Emotional Intensity ratings between hostility (M = 8.23; SD = .74) and anger (M = 8.16; SD = .8; p = .74; d = .1; S.E. = .14; B = .85). To explore whether the final subset of angry and hostile faces included substantial differences in emotion characteristics, the selected stimuli were tested for differences in frustration, stress, anger and hostility ratings. A repeated measures ANOVA revealed that there were significant differences between faces expressing anger and hostility (F (1, 25) = 4.65; p = .041; η² = .16). Bonferroni-corrected pairwise comparisons revealed that participants rated faces expressing anger (M = 8.27, SD = 1.05) higher for stress compared to faces expressing hostility (M = 7.36, SD = .84; p < .001; d = .96). For faces expressing anger (M = 8.19, SD = 1.39) we also found a trend for higher frustration ratings compared to faces expressing hostility (M = 7.63, SD = .76; p = .08; d = .49). Hostile expressions (M = 8.91, SD = .64) were rated higher for hostility compared to angry expressions (M = 6.35, SD = 1.14; p < .001; d = 2.77). Ratings for anger were not significantly different between faces expressing anger (M = 8.1, SD = 1.26) and faces expressing hostility (M = 8.29, SD = 1.21; p = .59; d = .08), and this comparison also revealed a trend for Bayesian sensitivity for the null (S.E. = .33; B = .45). The computer-based assessment identified all images as expressing anger (as opposed to fear, surprise, happiness, sadness or being neutral).
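The Bayes factors reported here were computed with the Dienes Calculator; a minimal numerical equivalent is sketched below, assuming the half-normal model of H1 described by Dienes (2016), in which the prior on the true effect has standard deviation equal to the predicted effect size. Function and parameter names are illustrative.

```python
import math

def dienes_bf(mean_obt, se, h1_sd, n_steps=20000, upper_mult=6):
    """Bayes factor B for a half-normal H1 (SD = predicted effect), after
    Dienes (2016). B < 1/3 is taken as sensitivity for the null; B > 3 as
    evidence for H1. Numerical-integration sketch, not the Calculator itself.
    """
    def norm_pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    # likelihood of the observed mean difference if the true effect is zero
    like_h0 = norm_pdf(mean_obt, 0.0, se)
    # marginal likelihood under H1: integrate over the half-normal prior (theta >= 0)
    upper = upper_mult * h1_sd
    step = upper / n_steps
    like_h1 = 0.0
    for i in range(n_steps):
        theta = (i + 0.5) * step
        prior = 2 * norm_pdf(theta, 0.0, h1_sd)  # half-normal density
        like_h1 += prior * norm_pdf(mean_obt, theta, se) * step
    return like_h1 / like_h0
```

An observed difference near zero with a small standard error relative to the predicted effect yields B below 1/3 (sensitivity for the null), while a clear difference yields B above 3.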
Further quantitative analysis of the percentage metric for the identification of facial action unit innervation differences between anger and hostility revealed significant differences (F (1, 9) = 16.11; p < .01; η² = .64) and showed that hostility included more pronounced head and gaze participant-oriented characteristics (see Table 1; see Supplemental Material 3: Stimulus Set Selection for actor gender analysis).
Notes: Percentage metric for the identification of facial action unit innervation between hostility and anger. One asterisk (*) indicates Bonferroni-corrected significance between faces expressing anger and hostility at p ≤ .01. Two asterisks (**) indicate Bonferroni-corrected significance between faces expressing anger and hostility at p ≤ .001.

Phase two
Aims: The aim of this phase of stage one was to rate and compare the final subset of angry and hostile expressions for the intent to physically harm an individual. Our hypothesis for this phase was that the expressive differences between faces labelled as angry and faces labelled as hostile in the previous phase could be partly due to expressive characteristics in hostile faces that could associate with higher intent for physical harm and that hostile faces will be associated with higher ratings for the intent to inflict physical harm.
Participants: A power calculation based on medium effect sizes (d = .5) and within-subject experimental trial repetitions was performed. The result revealed that twenty-seven participants would be required for P (1-β) ≥ .8 (Faul et al., 2009). Forty-three volunteers who were not part of Phase One participated in this phase. The exclusion criteria were current or previous DSM Axis I or II diagnosis, having a criminal record and suffering or having suffered abuse in their personal life through self-report. The participants were screened with the Somatic and Psychological Health Report Questionnaire, an on-line alexithymia questionnaire and the Aggression Questionnaire. Data from one participant were excluded from the analysis due to a possible mental health diagnosis (SPHERE-12). The final population sample consisted of forty-two participants (twenty female) with mean age 29.28 (SD = 4.29). The experiment was approved by the Ethics Committee of the School of Psychology of the University of Nottingham.
Methods: All stimuli were presented on a high frequency LED monitor set at 144 Hz (6.94 ms) and the presentation was created in the Builder and Coder components of PsychoPy v.1.90.02. Twenty faces from twenty actors were presented. Ten faces that were labelled as angry and ten faces that were labelled as hostile in the previous phase were presented. The session started with a training stage during which participants familiarised themselves with the keyboard and mouse response components of the experiment. The main experiment started with a fixation cross for two seconds (± one second). After the fixation cross, in random order, a single face was presented at fixation for one second followed by a black-and-white pattern mask for one second. A blank screen interval was then presented for two seconds. After the interval participants were asked "Please rate how likely you consider this expression to indicate intentions to physically harm an individual?". Participants were asked to use the mouse to rate from one (extremely low) to ten (extremely high) each expression and press OK to confirm their choice. A three-second blank screen was presented before the next trial.
Results and discussion: Hostile faces (M = 8.31, SD = .62) were rated higher for the likelihood to inflict physical harm compared to angry faces (M = 7.75, SD = .49; t (41) = 5.49, p < .001; d = 1.01), suggesting that hostile expressions included more pronounced physical-threat-related characteristics compared to angry faces. Although previous research has suggested that male actors are perceived as conveying more intense anger-related emotions compared to female actors (Hess, Adams, Grammer, & Kleck, 2009), no differences were found between male and female actors for the ratings related to the intent to inflict physical harm (t (41) = 1.03, p = .31; d = .21; S.E. = .33; B = 1.02) in the current phase.

Stage two: emotional presentation
Aims: The aim of this stage was to present angry, hostile, fearful, sad and neutral expressions for one second and measure skin-conductance, heart-rate and facial-emotional responses. Our hypothesis for this stage was that, due to expressive characteristics in hostile faces that indicate higher intent for physical harm, hostile faces would elicit higher physiological arousal than angry faces and that they would elicit fearful facial-emotional responses. As an exploratory hypothesis, we expected angry faces to be processed as communicating social cues that relate to frustration and experienced stress and to elicit mirroring, anger-related facial-emotional responses.
Participants: A power calculation based on medium effect sizes (partial eta-squared = .06; f = .25) and within-subject experimental trial repetitions was performed. The result revealed that twenty-eight participants would be required for P (1-β) ≥ .8 (Faul et al., 2009). Twenty-nine volunteers who were not part of Stage One participated in the current stage. The exclusion criteria were current or previous DSM Axis I or II diagnosis, having a criminal record and suffering or having suffered abuse in their personal life through self-report. The participants were screened with the Somatic and Psychological Health Report Questionnaire, an on-line alexithymia questionnaire and the Aggression Questionnaire. No participants were excluded. The final population sample consisted of twenty-nine (thirteen female) participants with mean age 31.82 (SD = 8.25). The experiment was approved by the Ethics Committee of the School of Psychology of the University of Nottingham.
Facial stimuli: The facial stimuli used were taken from the dataset created by Gur et al. (2002). Ten different faces from different actors expressing anger, hostility, fear, sadness and neutral emotions were used. Fifty non-facial blurs were also used. These were generated from black and white pattern stimuli and scrambled using pseudo-randomised pixel permutation in MATLAB. All stimuli were adjusted for interpupillary distance, transformed to grey scale and resized to a standard 1024 × 768 pixels resolution. Their luminance was averaged using the SHINE toolbox for MATLAB, and finally the faces were spatially aligned and framed in pure white within a cropped circle (Height: 6 cm; Width: 4 cm). The included stimuli were validated for emotional discrimination with face-reader software (Noldus, 2018) and participant assessment; they were controlled for low-level visual features, such as spatial frequency and gradient orientation differences, and the black and white pattern blurs were also separately adjusted for luminance contrast with the presented faces (see Tsikandilakis, Bali, & Chapman, 2019; Tsikandilakis, Chapman, & Peirce, 2018).
Physiological assessment: Skin conductance and heart rate were used to assess physiological responses. Skin-conductance responses were measured from the left hand (index/first and middle/second fingers) of each participant using disposable Ag/AgCl gelled electrodes. The signals were received by a BIOPAC System, EDA100C in units of microsiemens (μS) and recorded in AcqKnowledge (Braithwaite, Watson, Jones, & Rowe, 2013). Heart rate was measured via a single finger sensor on the left hand (ring/third finger). The signal was measured by a BIOPAC System, PPG100C using infra-red photoplethysmography of blood-flow fluctuations and converted and recorded in beats per minute (bpm) in AcqKnowledge. The presence of a phasic skin-conductance response was defined as an unambiguous increase occurring up to three seconds post-stimulus offset (van der Ploeg et al., 2017). The presence of a heart-rate response was defined as an event-related heart-rate peak in beats per minute occurring up to five seconds post-stimulus offset. Each score was calculated using the in-built derive-phasic-from-tonic and find-cycles routines as the highest peak in physiological responses (δ) with respect to a tonic baseline (δT) averaged across a period of one second before each stimulus onset, using parallel-port-input derived pre-stimulus onset markers (Braithwaite et al., 2013, p. 23).
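The δ-versus-δT scoring logic described above (highest post-offset peak relative to a one-second pre-stimulus tonic baseline) can be sketched as follows. This is a simplified stand-in for the AcqKnowledge routines, with hypothetical function and parameter names.

```python
import numpy as np

def phasic_score(signal, fs, onset_s, offset_s, baseline_s=1.0, window_s=3.0):
    """Score a phasic response as the highest post-offset peak (delta) relative
    to the mean of a pre-onset tonic baseline (delta-T).

    signal     : 1-D array of the physiological trace (e.g. microsiemens)
    fs         : sampling rate in Hz
    onset_s    : stimulus onset time in seconds
    offset_s   : stimulus offset time in seconds
    baseline_s : baseline window before onset (one second, as in the text)
    window_s   : response window after offset (three seconds for SCR)
    """
    onset = int(onset_s * fs)
    offset = int(offset_s * fs)
    baseline = signal[onset - int(baseline_s * fs):onset].mean()
    window = signal[offset:offset + int(window_s * fs)]
    return float(window.max() - baseline)
```

For heart rate the same logic applies with a five-second post-offset window (`window_s=5.0`), per the definition above.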
Facial recognition software: Computer-based analysis of participants' facial responses was conducted using Noldus FaceReader 7.1 with an HD camera mounted at the bottom of the presenting screen and centred on the participant's face. The analysis was run using the maximum video-capture frame rate allowed by the face-reader equipment (thirty fps). The analysis employed the Viola-Jones cascaded algorithm and an active appearance model (AAM) that used a 500-point Euclidean transformation to eliminate static identification variability for image quality, lighting, background variation and orientation (Lewinski, den Uyl, & Butler, 2014). Each participant was evaluated with respect to the expressed emotion after controlling for the influence of action units that were present in their own neutral expressions using the participant calibration module (Noldus, 2018). The analysis included the in-built emotional categorisation labels included in Noldus (anger, fear, surprise, happiness, sadness and neutral). Facial-emotional recognition of an emotion was defined as a categorical classification of an emotional response up to five seconds post-stimulus offset. Participants were aware that their facial expressions were recorded.
Main experiment: All stimuli were presented on a high frequency LED monitor set at 144 Hz (6.94 ms) and the presentation was created in the Builder and Coder components of PsychoPy v.1.90.02. A total of fifty faces and fifty non-facial pattern blurs were presented during this experiment. The session started with a training stage during which participants familiarised themselves with the keyboard and mouse response components of the experiment. The main experiment started with a fixation cross for two seconds (± one second). After the fixation cross, in random order, a single angry or hostile or fearful or sad or neutral face, or a non-facial pattern blur was presented at fixation for one second followed by a black-and-white pattern mask for one second (see Figure 2). A blank screen interval was then presented for seven seconds. After the interval participants were assigned a gender recognition engagement task. They were asked to choose from an on-screen list the gender of the presented face using the keyboard. The options included (a) male, (s) female and (d) unsure; the key assignment and the order of the list were randomised in each trial. The aim of the engagement task was to ensure stimulus attendance and the responses were not analysed further. After the engagement task a five-second blank screen was presented before the next trial (Berntson, Cacioppo, & Tassinary, 2017, p. 165).
An analysis of variance with independent variables Expression Type (Anger and Hostility) and Emotional Response (anger, fear, happiness, sadness, surprise, disgust and neutral) was run to determine whether expressions of anger and hostility elicited different facial-emotional responses during the presentation. The analysis revealed a significant effect of Emotional Response (F (2.72, 76.21) = 84.65; p < .001; η² = .75; Greenhouse-Geisser corrected) and a significant interaction (F (2.31, 64.54) = 10.94; p < .001; η² = .28; Greenhouse-Geisser corrected). Higher facial-emotional responses for fear were found in response to faces expressing hostility compared to faces expressing anger (p < .001; d = 2.14). Seeing expressions of anger elicited significantly higher facial-emotional responses for anger (p < .001; d = 1.16; see Figure 1). These results suggested that hostile faces were more effective elicitors of physiological arousal and elicited more instances of fearful facial-emotional responses compared to faces expressing anger. Although previous research has suggested that male actors are perceived as conveying more intense anger-related emotions compared to female actors (van der Ploeg et al., 2017), no differences were found between male and female actors for skin conductance (t (28) = 1.05; p = .3; d = .01; S.E. = .01; B = .91), heart rate (t (28)
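The Greenhouse-Geisser corrections applied to these repeated-measures tests multiply the F-test degrees of freedom by an epsilon estimated from the covariance matrix of the repeated measures. A minimal sketch of the epsilon-hat computation (not the authors' analysis code) is:

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon-hat from the k x k covariance matrix `S`
    of the repeated measures.

    Epsilon = 1.0 when sphericity holds; lower values (down to 1/(k-1))
    shrink the F-test degrees of freedom, as in the corrected tests above.
    """
    k = S.shape[0]
    # double-centre the covariance matrix (subtract row/column means, add grand mean)
    row = S.mean(axis=1, keepdims=True)
    col = S.mean(axis=0, keepdims=True)
    Sc = S - row - col + S.mean()
    # epsilon-hat = trace(Sc)^2 / ((k - 1) * sum of squared elements of Sc)
    return float(np.trace(Sc) ** 2 / ((k - 1) * np.sum(Sc ** 2)))
```

Reported corrected degrees of freedom (e.g. F (2.72, 76.21) instead of F (6, 168)) are the uncorrected degrees of freedom multiplied by this epsilon.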

Stage three: masked emotional presentation
Aims: The aim of this stage was to present angry, hostile, fearful, sad and neutral expressions for 34.72 ms with backward masking to a black-and-white pattern for 125 ms and measure physiological and facial-emotional responses. Our hypothesis for this stage was that, due to expressive characteristics in hostile faces that indicate higher intent to inflict physical harm, hostile faces would elicit higher physiological arousal than angry faces under conditions of visual suppression. Since the assessment of hostile faces and facial-emotional response analysis using computerised face-reading methods had not previously been undertaken under conditions of backward masking, two secondary questions explored in the current stage were whether we could report evidence for subliminal responses to hostile and other emotional faces, and whether facial-emotional responses to emotional faces could be reported when using backward masking.
Participants: A power calculation based on medium effect sizes (partial eta-squared = .06; f = .25) and within-subject experimental trial repetitions was performed. The result revealed that twenty-eight participants would be required for P (1-β) ≥ .8 (Faul et al., 2009). Thirty volunteers who were not part of Stages One and Two participated in the current stage. The exclusion criteria were current or previous DSM Axis I or II diagnosis, having a criminal record and suffering or having suffered abuse in their personal life through self-report. The participants were screened with the Somatic and Psychological Health Report Questionnaire, an on-line alexithymia questionnaire and the Aggression Questionnaire. No participants were excluded. The final population sample consisted of thirty participants (fifteen female) with mean age 32.9 (SD = 9.16). The experiment was approved by the Ethics Committee of the School of Psychology of the University of Nottingham.
Procedure: The equipment and the assessment of physiological and facial-emotional responses were identical to Stage Two. All stimuli were presented on a high frequency LED monitor set at 144 Hz (6.94 ms) and the presentation was created in the Builder and Coder components of PsychoPy v.1.90.02. To ensure that brief stimuli would be appropriately presented during the main experiment, an iPad Pro camera with a 240 Hz refresh rate (4.17 ms) recorded two pilot runs of the experiment and the stimulus presentation was assessed frame by frame; no instances of dropped frames were detected. A self-developed dropped-frame report script with a one-frame (6.94 ms) tolerance threshold was coded in Python and two pilot experimental diagnostic sessions were run. The presenting monitor reported no dropped frames; the prognostic dropped-frame rate was estimated at 1/5000 trials. The experimental stages were subsequently run using dropped-frame diagnostics and per-stimulus-presentation frame-rate monitoring of the presenting monitor; no instances of dropped frames were reported.
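The dropped-frame diagnostic can be illustrated as follows. This is a hypothetical reconstruction of the self-developed script, which is not published: it flags any inter-frame interval that exceeds the expected 6.94 ms frame duration by more than the stated one-frame tolerance.

```python
def dropped_frames(timestamps_ms, frame_ms=6.94, tolerance_frames=1):
    """Return (frame_index, gap_ms) for every inter-frame interval that
    exceeds the expected frame duration by more than the tolerance.

    At 144 Hz the expected interval is 6.94 ms, so with a one-frame
    tolerance any gap above 13.88 ms is flagged as a dropped frame.
    Hypothetical reconstruction of the diagnostic described in the text.
    """
    threshold = frame_ms * (1 + tolerance_frames)
    drops = []
    for i in range(1, len(timestamps_ms)):
        gap = timestamps_ms[i] - timestamps_ms[i - 1]
        if gap > threshold:
            drops.append((i, gap))
    return drops
```

Fed with the per-frame flip timestamps that PsychoPy can log, an empty return list corresponds to the "no instances of dropped frames" outcome reported above.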
Main experiment: A total of fifty faces and fifty non-facial pattern blurs were presented during this experiment. The session started with a training stage during which participants familiarised themselves with the keyboard and mouse response components of the experiment. The main experiment started with a fixation cross for two seconds (± one second). After the fixation cross, in random order, a single angry or hostile or fearful or sad or neutral face, or a non-facial pattern blur was presented at fixation for 34.72 ms followed by a black-and-white pattern mask for 125 ms (see Figure 2). A blank-screen interval was then presented for seven seconds. After the interval participants were assigned a signal detection engagement task. They were asked by an on-screen message to reply, using the keyboard, whether they saw a face during the presentation, choosing from an on-screen list. The options included (a) yes and (s) no; the key assignment and the order of the list were randomised in each trial. After this task the participants were asked to rate the confidence of their reply from one (not confident at all) to ten (extremely confident) using the mouse and press OK to confirm their choice. After the engagement tasks a five-second blank screen was presented before the next trial (Berntson, Cacioppo, & Tassinary, 2017, p. 165).
The same pattern of results was revealed for heart-rate responses (F (2.43, 70.57) = 60.89; p < .001; η² = .68; Greenhouse-Geisser corrected; see Figure 3). Faces expressing hostility (M = 5.38, SD = 1.53) elicited significantly higher heart-rate responses compared to angry (M = 2.83, SD = 1.55; p < .001; d = 1.66), sad (M = 2.07, SD = .33; p < .001; d = 2.99) and neutral expressions (M = 1.63, SD = .2; p < .001; d = 3.44). No significant differences were revealed between faces expressing hostility and fear (M = 5.05, SD = 1.53; p = .332; d = .36; S.E. = .31; B = .33) for heart-rate responses. Faces expressing fear elicited significantly higher heart-rate responses compared to angry (p < .001; d = 1.44), sad (p < .001; d = 2.69) and neutral expressions (p < .001; d = 3.13). Faces expressing anger elicited significantly higher heart-rate responses compared to neutral expressions (p < .001; d = 1.09). A trend for higher heart-rate responses was revealed in response to angry as compared to sad expressions (p = .014; d = .67). The facial-emotional assessment did not provide significant differences between emotional types (F (1, 29) = .39; p = .53; η² = .01), suggesting that the reduction in signal strength under conditions of backward masking caused a reduction in changes in facial-emotional responses when using automatic facial-recognition software.

Figure 2. Examples of the experimental stimulus sequence. Note: the example shows a male face expressing hostility and a male face expressing anger; only one target stimulus (an angry, hostile, fearful, sad or neutral face, or a non-facial blur) was shown in each trial; two faces are presented here for illustration purposes. During Stage Two the target stimulus was presented for one second, with the pattern mask also presented for one second. During Stage Three, the target faces were presented for 34.72 ms, with the pattern mask presented for 125 ms.
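The reported effect sizes above can be reproduced from the means and standard deviations given in the text, which suggests a pooled-SD form of Cohen's d. As a sketch (the exact formula used by the authors is not stated, so the pooled-SD version is an assumption):

```python
# Sketch: Cohen's d from two condition means and SDs using the pooled SD.
# The pooled-SD form is an assumption, but it reproduces the values
# reported in the text (e.g. hostile vs angry heart rate, d = 1.66).
from math import sqrt

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# Hostile vs angry and hostile vs sad heart-rate responses, from the text:
print(round(cohens_d(5.38, 1.53, 2.83, 1.55), 2))  # -> 1.66
print(round(cohens_d(5.38, 1.53, 2.07, 0.33), 2))  # -> 2.99
```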
Results and discussion, subliminality: As an exploratory addendum to the main analysis in the current stage, we also explored whether we could report evidence for subliminal processing. Hit-rate performance per stimulus type was transformed to the non-parametric sensitivity index A. A Bayesian analysis with corrected degrees of freedom (Berry, 1996) was run using the Dienes (2016) calculator to assess chance-level processing, with substantial evidence for the null hypothesis defined as a Bayes factor B below 1/3 (chance-level performance) and evidence for the alternative defined as a Bayes factor B above 3 (performance different to chance level). The intervals were conservatively defined at −.05 (.45; lower bound) and .05 (.55; upper bound), with 0 (A = .5) representing chance-level performance. Detection performance using non-parametric receiver operating characteristics was overall above chance (M = .6694; S.D. = .0134; S.E. = .0024; B > 3; for individual stimulus-type scores see Figure 4).
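For readers unfamiliar with non-parametric sensitivity indices, the idea can be illustrated with the classic A' of Pollack and Norman (1964). Note that the study itself used the corrected index A of Zhang and Mueller (2005), which adjusts A' for known biases; the simpler classic form is shown here only to convey the logic that .5 marks chance-level detection and values near 1 mark near-perfect detection.

```python
# Illustrative sketch: classic non-parametric sensitivity index A'
# (Pollack & Norman, 1964). The study used Zhang & Mueller's (2005)
# corrected index A; this simpler form is shown only to convey the idea.

def a_prime(hit_rate, fa_rate):
    """A' from a hit rate and a false-alarm rate; .5 is chance level."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

print(a_prime(0.5, 0.5))             # hits = false alarms: chance, -> 0.5
print(round(a_prime(0.75, 0.25), 3)) # above-chance detection -> 0.833
```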

Discussion
In the current studies, we initially used a forced categorisation task to identify faces expressing anger or hostility. We then assessed the emotional intensity of these expressions, as well as sub-features of these expressions such as frustration, stress, levels of anger and levels of hostility. While we found that anger and hostility were not different in terms of emotional intensity and perceived anger, angry expressions were rated higher for frustration and tended to be rated higher for stress compared to hostile expressions. On the other hand, hostile expressions were rated higher for hostility. We also found that hostile faces were rated higher for the intent to inflict physical harm. Using computer-based analysis of facial expressions, we found that hostility and anger differed in the expression of facial action units and that hostility included more pronounced head and gaze participant-oriented characteristics. When presenting angry, hostile, fearful, sad and neutral faces both overtly and using backward masking, we found higher physiological arousal (skin conductance, heart rate) in response to faces expressing hostility compared to faces expressing anger. We also found that hostility elicited fearful facial-emotional responses, while anger elicited angry facial-emotional responses.
Previous psychological models have proposed a distinction between state-anger, the response to emotional elicitors that could cause irritation, frustration and anger-related responses, and trait-anger, a more permanent personality characteristic that could lead to more intense and more frequent anger-related responses, even to innocuous cues. Previous models have also suggested that hostility is an overt or covert intention to physically harm an individual (Deffenbacher, 2000). Due to these differences, hostile expressions could elicit discrete physiological and facial-emotional responses (Vella & Friedman, 2007). In the current study we found that hostility and anger do elicit different physiological and facial-emotional responses and, particularly, that hostility is a more efficient elicitor of fear and physiological arousal compared to anger. The differences we found between anger and hostility have implications for psychological theory and applied science. A substantial number of previous studies tested the biological preparedness model (the concept that we respond automatically and involuntarily to environmental danger) using angry faces (Brooks et al., 2012). The current results suggest that, due to the inclusion of possibly familiar sub-characteristics such as stress and frustration, angry faces could elicit empathetic-mimicking responses, and that they are not necessarily suitable stimuli for the exploration of responses to environmental danger. Hostile faces, on the other hand, were more likely to be emotionally processed as an indication of threat and to lead to fear-related emotional responses.
Thus, hostile faces should be considered a more suitable candidate for studies that intend to examine automatic and involuntary responses to threat (van der Ploeg et al., 2017). Participants' responses in the current study do not suggest that subliminal processing took place (Brooks et al., 2012). Target meta-awareness, that is, correctly responding that a face had been presented, was a necessary condition for physiological responses to masked angry and hostile faces.
During Stage One, we were able to show that hostility and anger do not simply differ quantitatively concerning the level of emotionality of the presented faces. Participants did not rate the two expressions differently for emotional intensity, they rated anger higher for frustration and stress, and, critically, we found that, whereas hostility involves anger, anger does not necessarily involve hostility. This interpretation is also supported by the finding that hostility included differences in the expression of facial action units, such as head and gaze participant-oriented characteristics, and particularly pronounced facial characteristics related to action units fifty-seven (head forward) and sixty-one (direct eye gaze), that could be interpreted to signify harmful intent. On the other hand, if we were to consider anger and hostility as evolutionary concepts, the argument could be made that hostility could have evolutionary precedence over anger because it expresses survival-value characteristics related to physical fight-or-flight responses (Öhman, 2005). In this manner, our ability to express anger without hostility could be interpreted as a transference of survival-related facial expressions to a social environment in which we are trying to communicate to an audience high-arousal frustration and distress that do not necessarily entail the intent to inflict physical harm (Averill, 2012).
These interpretations are presented here as a first attempt to theoretically frame novel findings. In summary, we can argue that it is possible that hostility and anger belong to the same basic emotional expression category (Ekman, 1992). However, the current findings point towards the possibility that hostility elicits higher physiological arousal and fear-related facial-emotional responses, possibly due to expressive characteristics that signify potential intent for inflicting harm. These findings should not be misinterpreted to confer the message that anger is a categorically harmless (in terms of intent) expression of emotion. Instead, our findings should be interpreted to suggest that there are qualitative and quantitative differences in sub-characteristics in expressions originally labelled under the basic-emotion umbrella of anger that could indicate whether a face is expressing sufficient hostility-related features to be emotionally appraised as an indication of threat (Ramirez & Andreu, 2006).
From an applied science perspective, the current findings could have applications for the development of interpersonal communication strategies and the training of critical decision-making professionals. The current findings could potentially serve as a starting point for further research exploring the applicability and usefulness of evaluating anger and hostility in professional premises to help critical decision-making professionals respond appropriately in interpersonal encounters based on whether, and to what extent, an encounter indicates the intention for physical harm against an individual (Rippon, 2000).
These findings are novel and have important implications, but they should be considered as the beginning of a hopefully wider effort to explore whether the currently reported effects can be replicated and extended. For example, although previous research suggests that males and females portray anger differently, we were not able to report gender differences between male and female actors in the current studies. This could be due to the extensive and rigorous inclusion criteria and image controls implemented in the population sample and in the facial dataset, respectively, in the current studies (see, for example, Tsikandilakis, Bali, Derrfuss, & Chapman, 2019b; Tsikandilakis et al., 2019; see also Supplemental Material 3) and could reflect these controls, and not per se the absence of an effect. Along the same lines, we opted to use the current dataset in the current studies because the actors were allowed to express emotion subjectively and according to their own lived emotional experiences (Gur et al., 2002, pp. 142-143). Future research could benefit from a replication of the current design using additional facial stimulus sets and, particularly, from the exploration of whether facial stimulus sets that include action-unit-instructed expressions offer instances of faces that will be labelled as expressing hostility (see for example Ekman, 2007; Tottenham et al., 2009; van der Schalk et al., 2011), or whether the development of a dataset including a separate "Hostility" label is necessary to further advance the current research (Staugaard, 2010). The development of a dataset that includes a separate "Hostility" label should additionally be considered because the current findings could reflect a forced-choice subcategorisation between high- and low-hostility angry faces.
This interpretation is exploratory, in the sense that it is not mentioned or discussed in previous research, and it is not supported by the current data, in the sense that the expression of hostility was not a necessary condition for the discrimination of expressed anger, but it raises seminal and relevant questions that should be meticulously considered by further research before we decisively proceed to conceptually categorise hostility as a separate emotion and/or an emotional trait (Fiske, Cuddy, & Glick, 2007; but see also Jack, Garrod, Yu, Caldara, & Schyns, 2012). Finally, additional methods, such as emotional modulation using the eye-blink paradigm (Blumenthal et al., 2005; see also Supplemental Material 5.2) as well as neural assessment of responses to angry and hostile faces, could shed additional light on the behavioural and cognitive processes that could relate to the current findings.

Conclusions
In the current studies we explored whether angry and hostile faces differ in terms of physiological and facial-emotional responses. We found that seeing faces expressing hostility elicits higher physiological arousal compared to seeing faces expressing anger. We also found that seeing hostile faces elicits fear-related responses, while seeing angry faces elicits mirroring responses. We found that when angry and hostile faces were presented using backward masking, participants responded with physiological arousal only when they were able to detect the presented faces. Our findings suggest that hostility and anger are not equivalent emotional elicitors and that hostility is possibly a more suitable stimulus for inclusion in studies that explore biological preparedness and responses to threat, possibly because it includes expressive indications of physical harm. The current findings should be considered a first exploratory step towards additional replication, including additional facial datasets and additional behavioural and neural assessment methods.

Note 1. The black-and-white pattern mask was included in this stage to make the stimulus sequence identical to Stage Three: Masked Emotional Assessment.