Zero-shot keyword spotting for visual speech recognition in-the-wild

Visual keyword spotting (KWS) is the problem of estimating whether a text query occurs in a given recording using only video information. This paper focuses on visual KWS for words unseen during training, a real-world, practical setting which so far has received no attention from the community. To this end, we devise an end-to-end architecture comprising (a) a state-of-the-art visual feature extractor based on spatiotemporal Residual Networks, (b) a grapheme-to-phoneme model based on sequence-to-sequence neural networks, and (c) a stack of recurrent neural networks which learn how to correlate visual features with the keyword representation. Different to prior works on KWS, which try to learn word representations merely from sequences of graphemes (i.e. letters), we propose the use of a grapheme-to-phoneme encoder-decoder model which learns how to map words to their pronunciation. We demonstrate that our system obtains very promising visual-only KWS results on the challenging LRS2 database, for keywords unseen during training. We also show that our system outperforms a baseline which addresses KWS via automatic speech recognition (ASR), while it drastically improves over other recently proposed ASR-free KWS methods.


Introduction
This paper addresses the problem of visual-only Automatic Speech Recognition (ASR), i.e. the problem of recognizing speech from video information only, in particular from analyzing the spatiotemporal visual patterns induced by mouth and lip movements. Visual ASR is a challenging research problem, with decent results being reported only recently thanks to the advent of Deep Learning and the collection of large and challenging datasets [1,2,3].
In particular, we focus on the problem of Keyword Spotting (KWS) i.e. the problem of finding occurrences of a text query among a set of recordings. In this work we consider only words, however the same architecture can be used for short phrases. Although the problem can be approached with standard ASR methods, recent works aim to address it with more direct and "ASR-free" methods [4]. Moreover, such KWS approaches are in line with a recently emerged research direction in ASR (typically termed Acoustics-to-Word) where words are replacing phonemes, triphones or letters as basic recognition units [5,6].
Motivation. One of the main problems regarding the use of words as basic recognition units is the existence of Out-Of-Vocabulary (OOV) words, i.e. words for which the exact phonetic transcription is unknown, as well as words with very few or zero occurrences in the training set. This problem is exacerbated in the visual domain, where collecting, annotating and distributing large datasets for fully supervised visual speech recognition is a very tedious process. To the best of our knowledge, this paper is the first attempt towards visual KWS under the zero-shot setting.
Relation to zero-shot learning. Our approach shares certain similarities with zero-shot learning methods, e.g. for recognizing objects in images without training examples of the particular objects [7]. Different to [7], where representations of the objects encode semantic relationships, we wish to learn word representations that encode merely their phonetic content. To this end, we propose to use a grapheme-to-phoneme (G2P) encoder-decoder model which learns how to map words (i.e. sequences of graphemes, or simply letters) to their pronunciation (i.e. to sequences of phonemes). By training the G2P model on a training set of such pairs (i.e. words and their pronunciations), we obtain a fixed-length representation (embedding) for any word, including words not appearing in the phonetic dictionary or in the visual speech training set.
The proposed system receives as input a video and a keyword and estimates whether the keyword is contained in the video. We use the LRS2 database to train a Recurrent Neural Network (a Bidirectional Long Short-Term Memory, BiLSTM) that learns non-linear correlations between visual features and the corresponding keyword representation [8]. The backend of the network models the probability that the video contains the keyword and provides an estimate of its position in the video sequence. The proposed system is trained end-to-end, without information about the keyword boundaries, and once trained it can spot any keyword, even those not included in the LRS2 training set.
In summary, our contributions are: -We are the first to study Query-by-Text visual KWS for words unseen during training. -We devise an end-to-end architecture comprising (a) a state-of-the-art visual feature extractor based on spatiotemporal Residual Networks, (b) a G2P model based on sequence-to-sequence neural networks, and (c) a stack of recurrent neural networks that learn how to correlate visual features with the keyword representation. -We demonstrate that our system obtains very promising visual-only KWS results on the challenging LRS2 database.

Related Work
Visual ASR. During the past few years, the interest in visual and audiovisual ASR has been revived. Research in the field is largely influenced by recent advances in audio-only ASR, as well as by the state-of-the-art in computer vision, mostly for extracting visual features. In [9], CNN features are combined with Gated Recurrent Units (GRUs) in an end-to-end visual ASR architecture, capable of performing sentence-level visual ASR on a relatively easy dataset (GRID [10]). Similarly to several recent end-to-end audio-based ASR approaches, CTC is deployed in order to circumvent the lack of temporal alignment between frames and annotation files [11,12]. In [1,13], the "Listen, attend and spell" [14] audio-only ASR architecture is adapted to the audiovisual domain and tested on recently released in-the-wild audiovisual datasets. The architecture is an attentive encoder-decoder model with the decoder operating directly on letters (i.e. graphemes) rather than on phonemes or visemes (i.e. the visual analogues of phonemes [15]). It deploys a VGG for extracting visual features, and the audio and visual modalities are fused in the decoder. The model yields state-of-the-art results in audiovisual ASR. Other recent advances in visual and audiovisual ASR involve residual LSTMs, adversarial domain-adaptation methods, use of self-attention layers (i.e. the Transformer [16]), combinations of CTC and attention, gating neural networks, as well as novel fusion approaches [17,18,19,20,21,22,23,24].
Words as recognition units. The general tendency in deep learning towards end-to-end architectures, together with the challenge of simplifying the fairly complex traditional ASR paradigm, has resulted in a new research direction that uses words directly as recognition units. In [25], an acoustic deep architecture is introduced, which models words by projecting them onto a continuous embedding space. In this embedding space, words that sound alike are nearby in the Euclidean sense, differentiating it from other word embedding spaces where distances correspond to syntactic and semantic relations [26,27]. In [5,6], two CTC-based ASR architectures are introduced, where CTC maps acoustic features directly to words. The experiments show that CTC word models can outperform state-of-the-art baselines that make use of context-dependent triphones as recognition units, phonetic dictionaries and language models.
In the problem of audio-based KWS, end-to-end word-based approaches have also emerged. In [28], the authors introduce a KWS system based on sequence training, composed of a CNN for acoustic modeling and an aggregation stage, which aggregates the frame-level scores into a sequence-level score for words. However, the system is limited to words seen during training, since it merely associates each word with a label (i.e. a one-hot vector) without considering words as sequences of characters. Other recent works aim to spot specific keywords used to activate voice assistant systems [29,30,31]. The application of BiLSTMs to KWS was first proposed in [32]. The architecture is capable of spotting at least a limited set of keywords, having a softmax output layer with as many output units as keywords, and a CTC loss for training. More recently, the authors in [4] propose an audio-only KWS system capable of generalizing to unseen words, using a CNN/RNN to autoencode sequences of graphemes (corresponding to words or short phrases) into fixed-length representation vectors. The extracted representations, together with audio-feature representations extracted with an acoustic autoencoder, are passed to a feed-forward neural network which is trained to predict whether the keyword occurs in the utterance or not. Although this audio-only approach shares certain conceptual similarities with ours, the implementations differ in several ways. Our approach deploys a grapheme-to-phoneme model to learn keyword representations, it does not make use of autoencoders for extracting representations of visual sequences, and, more importantly, it learns how to correlate visual information with keywords from low-level visual features rather than from video-level representations.
The authors in [33] recently proposed a visual KWS approach using words as recognition units. They deploy the same ResNet feature extractor as we do (proposed by our team in [34,35] and trained on LRW [2]) and demonstrate the capacity of their network in spotting occurrences of the N_w = 500 words of LRW [36]. The bottleneck of their method is the word representation (each word corresponds to a label, without considering words as sequences of graphemes). Such an unstructured word representation may perform well on closed-set word identification/detection tasks, but it prevents the method from generalizing to words unseen during training.
Zero-shot learning. Analogies can be drawn between KWS with unseen words and zero-shot learning for detecting new classes, such as objects or animals. KWS with unseen words is essentially a zero-shot learning problem, where attributes (letters) are shared between classes (words) so that the knowledge learned from seen classes is transferred to unseen ones [37]. Moreover, similarly to a typical zero-shot learning training set-up where bounding boxes of the objects of interest are not given, a KWS training algorithm knows only whether or not a particular word is uttered in a given training video, without having information about the exact time interval. For these reasons, zero-shot learning methods which e.g. learn mappings from an image feature space to a semantic space [38,39] are pertinent to our method. Finally, recent methods in action recognition that use a representation vector to encode e.g. 3D human-skeleton sequences also exhibit certain similarities with our method [40].

System overview
Our system is composed of four different modules. The first module is a visual feature extractor, which receives as input the image frame sequence (assuming a face detector has already been applied, as in LRS2) and outputs features. A spatiotemporal Residual Network is used for this purpose, which has shown remarkable performance in word-level visual ASR [34,35].
The second module of the architecture receives as input a user-defined keyword (or, more generally, a text query) and outputs a fixed-length representation of the keyword in R^{d_e}. This mapping is learned by a grapheme-to-phoneme (G2P [41]) model, which is a sequence-to-sequence neural network with two RNNs playing the roles of encoder and decoder (similarly to [42]). The two RNNs interact with each other via the last hidden state of the encoder, which is used by the decoder to initialize its own hidden state. We claim that this representation is a good choice for extracting word representations, since (a) it contains information about the word's pronunciation without requiring the phonetic transcription during evaluation, and (b) it generalizes to words unseen during training, provided that the G2P is trained with a sufficiently large vocabulary.
The third module is where the visual features are combined with the keyword representation and non-linear correlations between them are learned. It is implemented by a stack of bidirectional LSTMs, which receives as input the sequence of feature vectors and concatenates each such vector with the word representation vector.
Finally, the fourth module is the backend classifier and localizer, whose aims are (a) to estimate whether or not the query occurs in the video, and (b) to provide an estimate of its position in the video. Note that we do not train the network with information about the time intervals in which keywords occur. The only supervision used during training is a binary label indicating whether or not the keyword occurs in the video, together with the grapheme and phoneme sequences of the keyword.
The basic building blocks of the model are depicted in Fig. 1.

Modeling visual patterns using spatiotemporal ResNet
The front-end of the network is an 18-layer Residual Network (ResNet) [43], which has shown very good performance on LRW [34] as well as on LRS2 [20]. It has been verified that CNN features encoding spatiotemporal information in their first layers yield much better lipreading performance, even when combined with deep LSTMs or GRUs in the backend [34,9,13]. For this reason, we replace the first 2D convolutional, batch-normalization and max-pooling layers of the ResNet with their 3D counterparts. The temporal size of the kernel is set to T_r = 5, and therefore each ResNet feature is extracted over a window of 0.2s (assuming 25fps). The temporal stride is equal to 1, since any reduction of the time resolution is undesired at this stage. Finally, the average pooling layer at the ResNet output (found e.g. in ImageNet versions of ResNet [43]) is replaced with a fully connected layer. Overall, the spatiotemporal ResNet implements a function x_t = f_r({I_t}; W_r), where W_r denotes the parameters of the ResNet and I_t the (grayscale and cropped) frame at time t.
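As an illustration, the 3D front-end described above can be sketched in PyTorch as follows. This is a minimal, illustrative module, not the authors' released code; the channel width (64) and exact padding follow standard ResNet-18 conventions and are our assumptions:

```python
import torch
import torch.nn as nn

# Sketch of the spatiotemporal front-end: the first 2D convolution,
# batch-normalization and max-pooling layers of a standard ResNet-18
# are replaced by 3D counterparts with temporal kernel size T_r = 5 and
# temporal stride 1, so each feature covers 5 frames (0.2 s at 25 fps)
# without reducing the time resolution.
frontend3d = nn.Sequential(
    nn.Conv3d(1, 64, kernel_size=(5, 7, 7),
              stride=(1, 2, 2), padding=(2, 3, 3), bias=False),
    nn.BatchNorm3d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
)

# Dummy input: batch of 2 grayscale videos, 29 frames of 112x112 pixels.
x = torch.randn(2, 1, 29, 112, 112)
y = frontend3d(x)
print(y.shape)  # temporal dimension preserved: (2, 64, 29, 28, 28)
```

Note that only the spatial resolution is reduced (112 → 28 after the two stride-2 stages); the 29 time steps pass through unchanged, as required for frame-level KWS.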
We use a model pretrained on LRW, which we fine-tune on the pretrain set of LRS2 using closed-set word identification. The pretrain set of LRS2 is useful for this purpose, not merely due to the large number of utterances it contains, but also due to its more detailed annotation files, which contain information about the (estimated) time each word begins and ends. Word boundaries permit us to excerpt fixed-duration video segments containing specific words and essentially mimic the LRW set-up. To this end, we select the 2000 most frequently appearing words containing at least 4 phonemes, and we extract frame sequences of 1.5s duration, with the target word in the center. The backend is a 2-layer LSTM (jointly pretrained on LRW), which we remove once training is completed.
Preprocessing. The frames in LRS2 are already cropped according to the bounding boxes extracted by the face detector and tracker [1,2]. We crop the frames further with a fixed set of coefficients C_crop = [15, 46, 145, 125], resize them to 122 × 122, and finally feed the ResNet with frames of size 112 × 112, after applying random cropping during training (for data augmentation) and fixed central cropping during testing, as in [34].

Grapheme-to-phoneme models for encoding keywords
Grapheme-to-phoneme (G2P) models are extensively used in speech technologies in order to learn a mapping G → P from sequences of graphemes G ∈ G to sequences of phonemes P ∈ P. Such models are typically trained in a supervised fashion, using a phonetic dictionary, such as the CMU dictionary (for English). The number of different phonemes in the CMU dictionary is N_phn = 69, with each vowel contributing more than one phoneme, due to the different levels of stress. The effectiveness of a G2P model is measured by its generalizability, i.e. by its capacity to estimate the correct pronunciation(s) of words unseen during training.
Sequence-to-sequence neural networks have recently shown their strength in addressing this problem [41]. In a sequence-to-sequence G2P model, both sequences are typically modeled by an RNN, such as an LSTM or a GRU. The first RNN is a function r = f_e(G, W_e), parametrized by W_e, which encodes the grapheme sequence G into a fixed-size representation r ∈ R^{d_r}, while the second RNN estimates the phoneme sequence P̂ = f_d(r, W_d). The representation vector is typically defined as the output of the last step, i.e. once the RNN has seen the whole grapheme sequence.
Our implementation of G2P involves two unidirectional LSTMs with hidden size d_l = 64. Similarly to sequence-to-sequence models for machine translation (e.g. [42]), the encoder receives as input the (reversed) sequence of graphemes, and the decoder receives the cell state c_{e,T} and the output h_{e,T} of the encoder (corresponding to the last time step t = T) to initialize its own state, denoted by c_{d,0} and h_{d,0}. To extract the word representation r, we first concatenate the two vectors into [h_{e,T}^T, c_{e,T}^T]^T, we then project it to R^{d_r} to obtain r, and finally we re-project r to obtain the initialization [h_{d,0}^T, c_{d,0}^T]^T of the decoder state, where x^T denotes the transpose of x. For the projections we use two linear layers with square matrices (since d_r = 2d_l), while biases are omitted for a more compact notation.
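A minimal PyTorch sketch of such an encoder follows (illustrative only; the class name, grapheme vocabulary size and embedding layer are our assumptions, not details taken from the paper):

```python
import torch
import torch.nn as nn

class G2PEncoder(nn.Module):
    """Sketch of the keyword encoder: a unidirectional LSTM over the
    reversed grapheme sequence; the final hidden and cell states are
    concatenated and linearly projected (square matrix, no bias) to the
    word representation r in R^{d_r}, with d_r = 2 * d_l = 128."""
    def __init__(self, n_graphemes=30, d_l=64):
        super().__init__()
        self.embed = nn.Embedding(n_graphemes, d_l)
        self.lstm = nn.LSTM(d_l, d_l, batch_first=True)
        self.proj = nn.Linear(2 * d_l, 2 * d_l, bias=False)

    def forward(self, graphemes):              # graphemes: (batch, seq_len)
        rev = torch.flip(graphemes, dims=[1])  # feed the sequence reversed
        _, (h_T, c_T) = self.lstm(self.embed(rev))
        r = self.proj(torch.cat([h_T[-1], c_T[-1]], dim=-1))
        return r                               # (batch, 128)

enc = G2PEncoder()
word = torch.tensor([[3, 1, 19]])  # hypothetical grapheme ids for "c-a-t"
r = enc(word)
print(r.shape)  # (1, 128)
```

In the full model, r would additionally be re-projected to initialize the decoder state; during evaluation only this encoder is needed.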
The G2P model is trained by minimizing the cross-entropy (CE) between the true phoneme sequence P^* and the posterior probability P(P_t | G), averaged across time steps, i.e. L_w(P^*, G) = -(1/T_P) Σ_{t=1}^{T_P} log P(P_t = P^*_t | G), where T_P denotes the length of the phoneme sequence.
Since the G2P model is trained with back-propagation, its loss function can be added as an auxiliary loss to the primary KWS loss function, and the overall architecture can be trained jointly. Joint training is highly desirable, as it enforces the encoder to learn representations that are optimal not merely for decoding, but for our primary task, too. During evaluation, the mapping G → r learned by the encoder is all that is required; the decoder f_d(·, W_d) and the true pronunciation P^* are not needed for KWS.

Stack of BiLSTMs, binary classifier and loss function
The backend of the model receives the sequence of visual features X = {x_t}_{t=1}^{T} of a video and the word representation vector r, and estimates whether the keyword is uttered by the speaker.
Capturing correlations with BiLSTMs. LSTMs have exceptional capacity in modeling long-term correlations between input vectors, as well as correlations between different entries of the input vectors, due to the expressive power of their gating mechanism, which controls the memory cell and the output [44]. We use two bidirectional LSTMs (BiLSTMs), with the first BiLSTM applying a transformation of the feature sequence X → Y, using the same dropout mask for all feature vectors of the same sequence [45]. The output vectors of the first BiLSTM are concatenated with the word representation vector to obtain y_t^+ = [y_t^T, r^T]^T. After applying batch-normalization to y_t^+, we pass them as input to the second BiLSTM, with equations defined as above, resulting in a sequence of output vectors Z = {z_t}_{t=1}^{T}, where z_t ∈ R^{d_v}. Note the equivalence between the proposed frame-level concatenation and keyword-based model adaptation: we may consider r as a means to adapt the biases of the linear layers in the three gates and the input to the cell, in such a way that the activations of its neurons fire only over subsequences of Z that correspond to the keyword encoded in r.
Feed-Forward Classifier for network initialization. For the first few epochs, we use a simple feed-forward classifier, which we subsequently replace with the BiLSTM backend discussed below. The outputs of the BiLSTM stack are projected with a linear layer (d_v, d_v/2) and passed to a non-linearity (Leaky Rectified Linear Unit, denoted by LReLU) to filter out entries with negative values, followed by a summation operator that aggregates over the temporal dimension, i.e. v = Σ_{t=1}^{T} LReLU(W^T z_t). After applying dropout to v, we project it with a linear layer (d_v/2, d_v/4) and again apply a LReLU. Finally, we apply a linear layer to drop the size from d_v/4 to 1 and a sigmoid layer, with which we model the posterior probability that the video contains the keyword, i.e. P(l | {I_t}_{t=1}^{T}, G), where l ∈ {0, 1} is the binary indicator variable and l^* its true value.
BiLSTM Classifier and keyword localization. Once the network with the feed-forward classifier is trained, we replace it with a BiLSTM classifier. The latter does not aggregate over the temporal dimension, because it aims to jointly (a) estimate the posterior probability that the video contains the keyword, and (b) locate the time step at which the keyword occurs. Recall that the network is trained without information about the actual time intervals in which the keyword occurs. Nevertheless, an approximate position of the keyword can still be estimated, even from the output of the BiLSTM stack. As Fig. 2 shows, the average activation of the input of the BiLSTM classifier (after applying the linear layer and LReLU) exhibits a peak, typically within the keyword boundaries. The BiLSTM classifier models this property by applying max(·) and argmax(·) in order to estimate the posterior that the keyword occurs and to localize the keyword, respectively. More analytically, the BiLSTM classifier receives the output features of the BiLSTM stack and passes them to a linear layer W of size (d_v, d_s), where d_s = 16, and to a LReLU, i.e. s_t = LReLU(W^T z_t). The BiLSTM is then applied on the sequence, followed by a linear layer (which drops the dimension from 2d_s to 1, i.e. a vector w and a bias b), the max(·) and finally the sigmoid σ(·) from which we estimate the posterior. More formally, H = BiLSTM(S), y_t = w^T h_t + b, p = σ(max(y)) and t̂ = argmax(y), where p = P(l = 1 | {I_t}_{t=1}^{T}, G) (i.e. the posterior that the keyword defined by G occurs in the frame sequence {I_t}_{t=1}^{T}), and t̂ is the time step where the maximum occurs, which should lie within the actual keyword boundaries. Note that we did not succeed in training the network with the BiLSTM classifier from scratch, probably due to the max(·) operator.
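The max/argmax backend described above can be sketched as follows (a toy PyTorch illustration with an assumed d_v = 256, sequence length 40 and random inputs, not the trained model):

```python
import torch
import torch.nn as nn

# Sketch of the BiLSTM classifier/localizer: d_v-dim inputs are projected
# to d_s = 16 and passed through a LeakyReLU, then a BiLSTM and a
# 2*d_s -> 1 linear layer produce per-frame scores; max(.) gives the
# detection posterior and argmax(.) the estimated keyword position.
d_v, d_s, T = 256, 16, 40
proj = nn.Linear(d_v, d_s)
lrelu = nn.LeakyReLU(0.01)
bilstm = nn.LSTM(d_s, d_s, batch_first=True, bidirectional=True)
out = nn.Linear(2 * d_s, 1)

z = torch.randn(1, T, d_v)                 # stand-in for the BiLSTM stack output
s = lrelu(proj(z))
h, _ = bilstm(s)
y = out(h).squeeze(-1)                     # per-frame scores, shape (1, T)
p = torch.sigmoid(y.max(dim=1).values)     # posterior that the keyword occurs
t_hat = y.argmax(dim=1)                    # estimated keyword time step
print(p.shape, t_hat.shape)
```

Because max(·) passes gradients only through the maximizing time step, training this backend from scratch is difficult, which is why the feed-forward classifier is used for initialization.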
Loss for joint training. The primary loss is the binary cross-entropy between the true label l^* and the estimated posterior p, i.e. L_v(l^*, p) = -l^* log(p) - (1 - l^*) log(1 - p), while the whole model is trained jointly by minimizing a weighted summation of the primary and auxiliary losses, i.e. L = L_v(l^*, p) + α_w L_w(P^*, G),
where α_w is a scalar balancing the two losses. It is worth noting that the representation vector r and the encoder's parameters receive gradients from both loss functions, via the decoder of the G2P model and the LSTM backend. Contrarily, the decoder and the binary classifier receive gradients only from L_w(·, ·) and L_v(·, ·), respectively.

Training the model
In this section we describe our recipe for training the model. We explain how we partition the data, how we create minibatches, and we give details about the optimization parameters.

LRS2 and CMU Dictionary partitions
We use the official partition of the LRS2 into pretrain, train, validation and test set. The KWS network is trained on pretrain and train sets. The pretrain set is also used to fine-tune the ResNet, as we discuss in Section 3.2. The G2P model is trained from scratch and jointly with the whole KWS network. LRS2 contains about 145K videos of spoken sentences from BBC TV (96K in pretrain, 46K in train, 1082 in validation, and 1243 in test set). The number of frames per video in the test set varies between 15 and 145.
In terms of keywords, we randomly partition the CMU phonetic dictionary into train, validation and test words (corresponding to fractions of 0.75, 0.05 and 0.20, respectively), while words with fewer than n_p = 4 phonemes are removed. Finally, we add to the test set of the dictionary those words initially assigned to the training and validation sets that do not occur in the LRS2 pretrain or train sets, since they are not used in any way during training. For the first 20 epochs we use (a) only the train set of LRS2 (because it contains shorter utterances and far fewer labeling errors than the pretrain set), (b) n_p = 4 and α_w = 1.0 (i.e. minimum number of phonemes and weight of the auxiliary loss, respectively), and (c) the simple feed-forward backend. After the 20th epoch, (a) we add the pretrain set, (b) we set n_p = 6 and α_w = 0.1, and (c) we replace the backend with the BiLSTM-based one (all network parameters except those of the backend are kept frozen during the 21st epoch).

Optimization
The loss function in eq. (5) is optimized with backpropagation using the Adam optimizer [46]. The number of epochs is 100, the initial learning rate is 2 × 10 −3 and we drop it by a factor of 2 every 20 epochs. The best model is chosen based on the performance on the validation set. The implementation is based on PyTorch and the code together with pretrained models and ResNet features will be released soon. The number of videos in each minibatch is 40, however, as explained in Section 4.2, we create multiple training examples per video (equal to twice the number of training keywords it contains). Finally, the ResNet is optimized with the configuration suggested in [34].

Experiments
We present here the experimental set-up, the metrics we use and the results we obtain using the proposed KWS model. Moreover, we report baseline results using (a) a visual ASR model with a hybrid CTC/attention architecture, and (b) an implementation of the ASR-free KWS method recently proposed in [4].

Evaluation metrics and keyword selection
KWS is essentially a detection problem, and in such problems the optimal threshold is application-dependent, typically determined by the desired balance between the false alarm rate (FAR) and the missed detection rate (MDR). Our primary error metric is the Equal Error Rate (EER), defined as the FAR (or MDR) when the threshold is set so that the two rates are equal. We also report MDR for certain low values of FAR (and vice versa) as well as FAR vs. MDR curves.
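For concreteness, a toy computation of the EER from a list of scores and binary labels might look as follows (a simple threshold sweep; real evaluations typically interpolate the ROC curve, and the function name here is our own):

```python
def equal_error_rate(scores, labels):
    """Toy EER computation: sweep the threshold over all observed scores
    and return (FAR + MDR) / 2 at the point where |FAR - MDR| is smallest."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    best_gap, eer = float("inf"), 1.0
    for thr in sorted(set(scores)):
        far = sum(s >= thr for s in neg) / len(neg)  # false alarm rate
        mdr = sum(s < thr for s in pos) / len(pos)   # missed detection rate
        if abs(far - mdr) < best_gap:
            best_gap, eer = abs(far - mdr), (far + mdr) / 2
    return eer

# Perfectly separable scores give EER = 0.
print(equal_error_rate([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 0.0
```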
Apart from EER, FAR and MDR, we evaluate the performance based on ranking measures. More specifically, for each text query (i.e. keyword) we report the percentage of times the score of a video containing the query is within the Top-N scores, where N ∈ {1, 2, 4, 8}. Since a query q may occur in more than one video, a positive pair with score s_{q,v} is considered Top-N if the number of negative pairs associated with the given query q whose score is higher than s_{q,v} is less than N. The evaluation is performed by creating a list of single-word queries, containing all words appearing in the test utterances and having at least 6 phonemes. Keywords appearing in the training and development sets are removed from the list. The final number of queries in the list is N_q = 635. Each query is scored against all N_test = 1243 test videos, so the number of all pairs is N_q N_test = 789305. The number of positive pairs is N_p = |{(q, v) | l_{q,v} = 1}| = 873, and N_p > N_q because some keywords appear in more than one video.
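The Top-N criterion described above can be sketched as follows (a toy illustration; the function name and the query/video identifiers are made up):

```python
def top_n_rate(pair_scores, pair_labels, n):
    """pair_scores[q][v] / pair_labels[q][v]: score and 0/1 label for each
    query-video pair. A positive pair counts as Top-N if fewer than n
    negative pairs of the same query score higher. Returns the fraction
    of positive pairs that are Top-N."""
    hits, total = 0, 0
    for q in pair_scores:
        for v, s in pair_scores[q].items():
            if pair_labels[q][v] != 1:
                continue
            total += 1
            better_negs = sum(1 for v2, s2 in pair_scores[q].items()
                              if pair_labels[q][v2] == 0 and s2 > s)
            if better_negs < n:
                hits += 1
    return hits / total

scores = {"query": {"vid1": 0.9, "vid2": 0.7, "vid3": 0.8}}
labels = {"query": {"vid1": 0, "vid2": 1, "vid3": 0}}
print(top_n_rate(scores, labels, 1))  # two negatives score higher -> 0.0
print(top_n_rate(scores, labels, 4))  # within the Top-4 -> 1.0
```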

Baseline and proposed networks
CTC/Attention Hybrid ASR model. We present here our baseline obtained with an ASR-based model. We use the same ResNet features but a deeper (4-layer) and wider (320-unit) BiLSTM. The implementation is based on the open-source ESPnet Python toolkit presented in [47], using the hybrid CTC/attention character-level network introduced in [48]. The system is trained on the pretrain and train sets of LRS2, while for training the language model we also use the Librispeech corpus [49]. The network attains WER = 71.4% on the LRS2 test set. In decoding, we use the single-step decoder beam search (proposed in [48]) with |H| = 40 decoding hypotheses h ∈ H. Similarly to [50], instead of searching for the keyword only in the best decoding hypothesis, we approximate the posterior probability that a keyword q occurs in the video v with feature sequence X as P(q ∈ v | X) ≈ Σ_{h ∈ H} 1_{[q ∈ h]} exp(c s_h) / Σ_{h ∈ H} exp(c s_h), where 1_{[q ∈ h]} is the indicator function that the decoding hypothesis h contains q, s_h is the score (log-likelihood) of hypothesis h (combining CTC and attention [48]) and c = 5.0 is a fudge factor optimized on the validation set.
Baseline with video embeddings. We implement an ASR-free method very close to the one proposed in [4] for audio-based KWS. Different to [4], we use our LSTM-based encoder-decoder instead of the proposed CNN-based one. A video embedding is extracted from the whole utterance, concatenated with the word representation, and fed to a feed-forward binary classifier as in [4]. This network is useful in order to emphasize the effectiveness of our frame-level concatenation.
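The hypothesis-level scoring rule can be illustrated as follows (a toy sketch of a softmax over N-best scores; the hypothesis strings, scores and function name are made up for illustration):

```python
import math

def keyword_posterior(hypotheses, keyword, c=5.0):
    """Approximate P(keyword occurs | video) from an N-best list: a softmax
    (with fudge factor c) over hypothesis log-likelihoods, summed over the
    hypotheses that contain the keyword. `hypotheses` is a list of
    (text, score) pairs from the beam search."""
    weights = [math.exp(c * s) for _, s in hypotheses]
    z = sum(weights)
    return sum(w for (text, _), w in zip(hypotheses, weights)
               if keyword in text.split()) / z

hyps = [("about the weather", -1.0), ("about the whether", -2.0)]
p = keyword_posterior(hyps, "weather", c=1.0)
print(round(p, 3))  # 0.731
```

Mass concentrates on the higher-scoring hypothesis, so a keyword present only in low-scoring hypotheses receives a correspondingly small posterior.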
Proposed Network and alternative Encoder-Decoder losses. To assess the effectiveness of the proposed G2P training method, we examine 3 alternative strategies: (a) The encoder receives gradients merely from the decoder, which is equivalent to training a G2P network separately, using only the words appearing in the training set. (b) The network has no decoder, auxiliary loss or phoneme-based supervision, i.e. the encoder is trained by minimizing the primary loss only. (c) A Grapheme-to-Grapheme (G2G) network is used instead of a G2P. The advantage of this approach over G2P is that it does not require a pronunciation dictionary, i.e. it requires less supervision. The advantage over the second approach is the use of the auxiliary loss (over graphemes instead of phonemes), which acts as a regularizer.

Experimental Results on LRS2.
Our first set of results, based on the detection metrics, is given in Table 1. We observe that all variants of the proposed network attain much better performance than the video-embedding baseline. Clearly, video-level representations cannot retain the fine-grained information required to spot individual words. Our best network is the proposed Joint-G2P network (i.e. the KWS network jointly trained with the G2P), while the degradation of the network when graphemes are used as targets in the auxiliary loss (Joint-G2G) underlines the benefits of phonetic supervision. Nevertheless, the degradation is relatively small, showing that the proposed architecture is capable of learning basic pronunciation rules even without phonetic supervision. Finally, the variant without a decoder during training is inferior to all other variants (including Joint-G2G), showing the regularization capacity of the decoder. The FAR-MDR tradeoff curves are depicted in Fig. 3(a), obtained by shifting the decision threshold applied to the output of the network. The curves show that the proposed architecture with G2P and joint training is superior to all others examined, at all operating points. Finally, we omit results obtained with the ASR-based model, as the scoring rule described in eq. (6)-(7) is inadequate for measuring EER. The model yields very low FAR (≈ 0.2%) at the cost of very high MDR (≈ 63%) at all reasonable operating points.
Length of keywords and camera view. We are also interested in examining the extent to which the length of the keyword affects the performance. To this end, we increase the minimum number of phonemes from n_p = 6 to 7 and 8. Moreover, we evaluate the network only on those videos labeled as Near-Frontal (NF) view, by removing those labeled as Multi-View (the labeling is given in the annotation files of LRS2). The results are plotted in Fig. 3(b). As expected, the longer the keywords, the lower the error rates. Moreover, the performance is better when only NF views are considered.
Ranking measures and localization accuracy. We measure here the percentage of times videos containing the query are in the Top-N scores. The results are given in Table 2. As we observe, our best system scores Top-1 equal to 34.14%, meaning that in about 1 out of 3 queries, the video containing the query is ranked first amongst the N_test = 1243 videos. Moreover, in 2 out of 3 queries the video containing the query is amongst the Top-8. The other training strategies perform well, too, especially the one where the encoder is trained merely with the auxiliary loss (G2P-only). The ranking measures attained by the video-embedding method are very poor, so we omit them. The ASR-based system attains a relatively high Top-1 score; however, the rest of its scores are rather poor. We should emphasize, though, that other ASR-based KWS methods exist for approximating the posterior of a keyword occurrence, e.g. using explicit keyword lattices [51], instead of using the set of decoding hypotheses H created by the beam search in eq. (6)-(7).
Finally, we report the localization accuracy for all versions of the proposed network, defined as the percentage of times the estimated location t̂ lies within the keyword boundaries (±2 frames). The reference word boundaries are estimated by applying forced alignment between the audio and the actual text. We observe that although the algorithm is trained without any information about the location of the keywords, it can still provide a very precise estimate of the keyword's location in the vast majority of cases.

Conclusions
We proposed an architecture for visual-only KWS with text queries. Rather than using subword units (e.g. phonemes, visemes) as main recognition units, we followed the direction of modeling words directly. Contrary to other word-based approaches, which treat words merely as classes defined by a label (e.g. [35]), we inject into the model a word representation extracted by a grapheme-to-phoneme model. This zero-shot learning approach enables the model to learn nonlinear correlations between visual frames and word representations and to transfer its knowledge to words unseen during training. The experiments showed that the proposed method is capable of attaining very promising results on the most challenging publicly available dataset (LRS2), outperforming the two baselines by a large margin. Finally, we demonstrated its capacity in localizing the keyword in the frame sequence, even though we do not use any information about the location of the keyword during training.

Acknowledgements
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 706668 (Talking Heads). We are grateful to Dr. Stavros Petridis and Mr. Pingchuan Ma (i-bug, Imperial College London) for their contribution to the ASR-based experiments.