Responsible Domestic Robotics: Exploring Ethical Implications of Robots in the Home

The vision of robotics in the home is driven by increased convenience, comfort, companionship, and greater security for users. However, if robots are not developed in a responsible manner, the robot industry risks causing harm to users, being rejected by users, or being regulated in overly prescriptive ways. There is a need to create more socially responsible robotics and, in this paper, we explore some of the challenges and requirements for this, both conceptually and empirically. To do this, we first explore the emergence of robots in the home, examining definitions of robotics and the current commercial state of the art. In particular, we consider emerging technological trends, such as smart homes, that are already embedding computational agents in the fabric of everyday life. By turning to human computer interaction, particularly notions of values in design, we unpack the importance of user centric design and of the home as a deployment setting for domestic robotics. Subsequently, we consider the nature of responsibility in robotics, examining what it means and considering lessons from past home information technologies. In this paper, we look at a specific responsibility, namely that of roboticists to engage with user concerns and needs, and to respond to them appropriately in design. In wider IT design, this often does not occur sufficiently, leading to technologies that are not fit for purpose and that disrupt the social order of the home. Working from this basis, we then present findings from an exploratory, qualitative survey we conducted to highlight concerns users have about domestic robots. The survey established a range of themes, but we focus on the form of robots, privacy concerns and aspects of trust. To explore these in more depth, we then analyse relevant literature from across technology law, computer ethics and computer science, to reflect on how these concerns are discussed there.
We conclude by drawing together both our empirical observations and conceptual analysis, considering what is needed for the future of responsible domestic robotics: user centric design.


Introduction
The vision of robotics in the home promises increased convenience, comfort, companionship, and greater security for users. However, if robots are not developed in a socially responsible manner, the robot industry risks causing harm to users, being rejected by society at large, or being regulated in overly prescriptive ways. There is a need to create more socially responsible domestic robotics and, in this paper, we explore some of the challenges and requirements for this, both conceptually and empirically.
Fears of robot uprisings are peppered throughout decades of science fiction literature and film (Higbie, 2013). However, visions of technological futures often say more about the period in which they were written than they do about the futures that actually emerge (Reeves, 2012), as we have seen with computer science research into 'ubicomp' (Bell & Dourish, 2006). Whilst popular science and cultural visions of robots may not have fully emerged, computational agents have most definitely left the lab and are entering our homes in a variety of forms. The 'internet of things' is incrementally making homes 'smarter' by embedding networked, ambient technologies with varying degrees of autonomy into the physical and social fabric of domestic life. These devices can be for security (smart CCTV and locks), comfort (smart bulbs and thermostats) and entertainment (conversational agents in smart speakers). These artefacts may not all be 'robots' in the popular sense of the word, but they are restructuring interactions, social order and relationships in the home. As domestic service robot technologies advance and become more commercially accessible, the smart home will have already changed the domestic setting and laid the groundwork for robots to enter. Accordingly, roboticists need to learn from mistakes being made with smart homes, including by designing in more user centric ways. We need to understand user concerns and respond to them accordingly, to create a more sustainable domestic robot future.
To unpack this argument, the paper structure is as follows.
-Firstly, we explore different definitions of robots to show the contested nature of the term. We do this to better understand the backdrop of robots emerging in the home, ushered in by the commercial smart home trend.
-Secondly, we will turn to human computer interaction, particularly notions of values in design, to stress the importance of user centric design and why the home as a deployment setting for domestic robotics heightens the need for this design approach.
-Thirdly, we consider the nature of responsibility for robotics, examining what it means for domestic robotics.
-Fourthly, linking our analysis of HCI to responsibility, we argue that a specific responsibility of roboticists is to engage with user concerns and needs, and to respond to them appropriately in design. In wider IT design, this often does not occur sufficiently, leading to technologies that are not fit for purpose and disrupt the social order of the home in problematic ways.
-Fifthly, we then present findings from a small-scale, exploratory, qualitative survey we conducted to understand concerns users have about domestic robots in 2018.
-Finally, we focus in detail on issues around trust, as one of the key challenges, considering what lies ahead for responsible domestic robotics.

Definitions
Settling on definitions of what domestic robots are is no easy task. We briefly look at technical, legal, and philosophical perspectives below, before advancing how we frame them for this paper.
Standards are a good place to start navigating a domain, as they can show where multi-stakeholder consensus lies around a topic. The International Federation of Robotics/United Nations Economic Commission for Europe were influential in classifying robots, culminating in the ISO standard 8373:2012 on Robots and Robotic devices. This standard differentiates between, among others, industrial, mobile, service, personal service and professional service robots. According to the standard, a robot is "an actuated mechanism programmable in two or more axes with a degree of autonomy, moving within its environment, to perform intended tasks. Autonomy in this context means the ability to perform intended tasks based on current state and sensing, without human intervention" (ISO 8373, s2.08). We are most interested in 'service robots', i.e. a 'robot that performs useful tasks for humans or equipment excluding industrial automation applications' (ISO 8373, s2.10), and particularly the sub-category of 'personal service robot', as these are most likely to be deployed in the domestic setting. These are defined as "service robots for personal use…used for a non-commercial task, usually by lay persons… (i.e.) domestic servant robot, automated wheelchair, personal mobility assist robot, and pet exercising robot" (ISO 8373, s2.11). As we can see, these definitions foreground the materiality of the artefact (i.e. being able to actuate physically), the varying degrees of autonomy robots possess to shape their environment, the relationship of utility to humans, and the split between industrial and personal.
If we look more widely, turning to academic sources, we see robots framed slightly differently. For Mataric (2007, p. 2), "a robot is an autonomous system which exists in the physical world, can sense its environment, and can act on it to achieve some goals". Bryson and Winfield state robots are "artefacts that sense and act in the physical world in real time", and they state that a smartphone counts as a robot, as it can sense when it is falling or its orientation changes (Bryson & Winfield, 2017, p. 117). Both definitions encapsulate the ability to act in the physical world, but do not necessarily require robots to be physical themselves. In providing a more design orientated definition for domestic robots, Bartneck & Forlizzi (2004) highlight the interactional aspects, stating "a domestic robot is an autonomous or semi-autonomous robot that interacts and communicates with humans by following the behavioural norms expected by the people with whom the robot is intended to interact" (p. 2). This definition foregrounds the interactional aspect, and particularly the extent to which robots fit into pre-existing norms and contexts. All of the above perspectives feature in EU legal discussions around civil liability for robots, which recommend defining 'smart robots' by focusing on the attributes of "the capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data; the capacity to learn through experience and interaction; the form of the robot's physical support; the capacity to adapt its behaviour and actions to the environment" (European Parliament, 2017, p. 18). However, in defining robots, neatly separating them from interactive AI becomes a challenge, e.g. human-agent collectives, IBM Watson, Google Duplex, DeepMind AlphaGo etc. Whilst some definitions above focus on the physicality of robots, they do not exclude non-physical, more ethereal robots that actuate in the real world.
Given the current trend towards smart homes integrating more ethereal devices that provide cognitive support rather than physical interaction, there is a case for considering interactive AI too. This includes search functionality (conversational agents in different devices, like Amazon Alexa), heating management (smart thermostats like Nest) and observational security of space (Nest Cam).
It is worth briefly reflecting on definitions of AI as digital artefacts that have intelligence, i.e. 'capacity to perceive contexts for action…to act…to associate contexts to actions' using techniques like speech or pattern recognition (Bryson & Winfield, 2017, p. 117). This wider framing encapsulates many of the IoT technologies we are seeing in the home. Accordingly, in this paper we consider interactive artificial intelligence in addition to more material framings of robots.

Commercial State of the Art
Irrespective of definitions, we are observing technologies with varying degrees of agency and artificial intelligence emerging for the domestic setting. To better understand this trend and contextualise our discussions, we now explore some statistics and examples of domestic service robots.
The number of service robots in the home is growing at an impressive rate globally. The International Federation of Robotics 2017 Report on World Robotics (International Federation of Robotics, 2017) states there was an annual increase in sales of personal and domestic service robotics from 2015-2016 of 24%, to roughly 6.7m robots, with the market valued at US$2.6bn (International Federation of Robotics, 2017, p. 14). Interestingly, US companies dominate as manufacturers of domestic service robots, whereas 94% of elderly and handicap assistance robots come from Asian/Australian companies (IFR, 2017, p. 18).
If we turn to the current domestic service robot market, we observe offerings from both start-ups and major manufacturers. Companies creating new robot products include Honda, with its 3E family of modular robot platforms for assisting with mobility and even sports training (Honda, 2018); Panasonic, with a desktop companion robot complete with child-like voice to 'add realism' (Panasonic, 2017); and Bosch, whose Mykie helps with cooking and projects recipes onto the wall (Clark Thompson, 2017). Humanoid robots are hitting the market too, performing tasks as diverse as conducting funerals (Gibbs, 2017), teaching yoga (Ubtech Robotics Lynx (Gebhard, 2018)) and personal videography (Mayfield Robotics Kuri (Kuri, 2018)).
For macro level insights, we return to (IFR, 2017), which states that the main domestic service robots sold (more than 4.6 million in 2016) are for "vacuum and floor cleaning, lawn-mowing robots, and entertainment and leisure robots, including toy robots, hobby systems, education and research" (p. 14). Some interesting examples we found in our own research, as of May 2018, are presented below. This list is, of course, not exhaustive.
Service robots:
-B. Miso Robotics Kitchen Assistants - burger-flipping robots that detect when food reaches the desired temperature.
-C. Starship - food delivery robots that were tested in San Francisco, but received a mixed reception from pedestrians.
-D. Robot Barista - automated processing of drinks orders in a known environment, trialled in Japan.
Smart speakers:
-A. Amazon Echo - smart speakers developed by Amazon. They use the voice-controlled intelligent personal assistant Alexa, and are capable of playing audiobooks and music, setting alarms, making to-do lists, streaming podcasts, and providing real-time information such as weather and rail times. They can also control several other smart home devices, such as heating and lighting.
-B. Google Home - similar to Echo, Google Home speakers enable users to speak voice commands to interact with services through Google's intelligent personal assistant, Google Assistant, offering the same breadth of services as above.
-C. HomePod - this smart speaker, developed by Apple Inc., uses Apple's own smart assistant, Siri, to control the speaker and other HomeKit devices. It can connect to Apple products such as iPhones and services such as iTunes.
-D. Invoke - a further iteration of the smart speaker, this time utilising Microsoft's intelligent personal assistant, Cortana, effectively providing the same services to Microsoft users.
We now consider perspectives from HCI on the importance of the home as a deployment setting for domestic robots.

The Importance of the Home
One of our key arguments is that the growth in domestic internet of things technologies, the so-called smart home, is paving the way for domestic robots. However, the process of integrating IoT into the home has impacts for residents living with these devices. We are already observing a growing interface between domestic IoT and robots. Robots that manage and speak to IoT devices act as interfaces for users and intermediaries for services, providing more intuitive interactions between user and devices, e.g. LG's CLOi (an expressive bot with eyes that does not always work as promised (Kelion, 2018)). Companies such as Amazon, already established in homes through Alexa/Echo, have robotics aspirations too (Gurman & Stone, 2018). Hence, we need to learn from the mistakes currently being made in terms of responsibility for privacy, security and trust with IoT (Brown, 2015), to ensure these are not replicated for robots.
Like robots, IoT has moved from the lab to the home, and the consumer market has grown hugely in recent years (Cisco, 2013; Panetta, 2017). Ownership of domestic IoT devices is anticipated to rise significantly, with the OECD predicting that by 2022 a family of four will own 2 connected cars, 7 smart light bulbs, 5 internet connected power sockets, 1 intelligent thermostat and so on (OECD Working Party on Communication Infrastructures and Services Policy, 2013). Whilst these predictions may be optimistic, we are no longer constrained to visions like Weiser's ubiquitous computing (Weiser, 1993) or Philips' Ambient Intelligence (Aarts & Marzano, 2003); an IoT future is here (just perhaps not the one originally envisioned, of invisible computers and seamless networking (Bell & Dourish, 2006)). However, many market offerings are for goods or services individuals do not know they even want or need (Lee, Choi, & Kim, 2017; Leppänen & Jokinen, 2003). Wilson (2015) found that whilst benefits like increased efficiency, comfort, convenience, energy management, care and security are promised, designers need to look at "how the use and meaning of technologies will be socially constructed and iteratively negotiated, rather than being the inevitable outcome of assumed functional benefits" (p. 466), because homes are 'internally differentiated, emotionally loaded, shared and contested places' (p. 470).
Numerous studies examine how individuals live with smart domestic technologies. A US study by Brush et al. (2011) found user frustration caused by unreliability, devices requiring iterative tweaking over time, and security concerns about unauthorised remote access (particularly for locks and home cameras). Users still desired such technologies, and a recent study on user perceptions of privacy risks in IoT found that users still purchase these devices despite privacy concerns, showing the privacy paradox continues with IoT (Williams, Nurse, & Creese, 2017). Mäkinen (2016) found internal tensions for 13 residents around trade-offs with home surveillance systems in Finland, for example balancing a sense of safety and protection of the home against fear of being watched without knowledge, or the implications of monitoring other home occupants, such as perceived spying (p. 75). More recently, Coskun, Kaner, & Bostan (2018) explored reasons why smart home technologies do not have greater uptake. They found, for example, that users want smart home technology to take over chores, but do not want automation to interfere with pleasurable activities, such as cooking, or to go beyond providing comfort into improving skills. This shows the contested nature of domestic life, and the need to respond to context and users through user centric design approaches. These lessons from smart homes could inform domestic roboticists and support the design of systems users actually want.

The Challenges of Domestic Robots
Robots pose numerous ethical challenges for privacy rights, security management, trust relationships, identity formation and limitations on user autonomy (Coeckelbergh, 2012; Leenes & Lucivero, 2014). As IoT paves the way for domestic robots, we already see the security and privacy vulnerabilities it can bring. These can be unintended, such as publicly accessible unsecured IoT devices with video feeds enabling data to move outside of contextually appropriate boundaries of the home (Nissenbaum, 2009; Osborne, 2016; Wetmore, 2018). Similarly, they can be intended, driven by business models of data repurposing, such as Roomba selling floor plans of user homes (Jones, 2017). Private practices are often made visible in the process of human robot interaction, and data about these practices is used as a resource in the provision of new value-added services with robots; hence, perceptions of robots and interactive AI as tools of surveillance become established (Calo, 2010; Schafer & Edwards, 2017; Sharkey & Sharkey, 2012). Inferences about behaviour based on social sorting (Lyon, 2003) of data doubles (Haggerty & Ericson, 2000) can be used for social control and manipulation, with the home setting making intimate behaviours observable and auditable. Opacity around the ecosystem of stakeholders interested in knowing how users live makes it hard for users to know if and why they are being watched: is it to monetise, police or manage their actions? Examples of products being pulled because of privacy concerns, particularly with child users, e.g. Mattel's Aristotle (Hern, 2017), highlight the public perception of such risks.
To better understand public perceptions, we observe EU wide citizen attitudes to living with robots. A 2015 Eurobarometer study on autonomous systems (EU, 2015) found that "Eight in ten Europeans (82%) who use robots think well of them, while nine in ten (90%) among them would purchase one". A more recent 2017 Eurobarometer study (EU, 2017) of c. 28,000 EU citizens states that 35% would be comfortable with robot support at work or delivering goods, but only 26% when it is for companionship, for services when elderly/infirm, or for performing an operation. It finds that, "overall, 88% of respondents agree robots and artificial intelligence are technologies that require careful management" (EU, 2017). Accordingly, we now engage with the question of responsibility for such careful management, focusing on one stakeholder group in particular: roboticists.

The Nature of Responsibility and the Role of Roboticists
Responsibility is a loaded concept, having different meanings for different communities: morally, legally, and societally. We are particularly interested in the responsibilities of roboticists, as opposed to robots themselves. The influential EPSRC Principles of Robotics recognise this, targeting their principles towards designers, builders and users of robots. They argue: "Robots are simply tools of various kinds, albeit very special tools, and the responsibility of making sure they behave well must always lie with human beings" (Boden et al., 2017, p. 125). Accordingly, we focus on the responsibilities of roboticists, by turning to existing work on how innovators consider their wider responsibilities to society. The case for engineers' and developers' duty to look beyond function is that "engineering design is an inherently moral activity" (Verbeek, 2006, p. 368) and "in effect, engineers ought to be considered de facto policymakers, a role that carries implicit ethical duties" (Millar, 2008, p. 4). In foregrounding the needs of users, Shneiderman (1990) called on 'researchers, system designers, managers, implementers, testers and trainers of user interfaces and information systems' to exert influence, moral leadership and responsibility to find "ways to enable users to accomplish their personal and organisational goals whilst pursuing higher societal goals and serving human needs" (Shneiderman, 1990, p. 2).
In supporting such responsibilities, we need tools. One popular approach is codes of ethics. Professional bodies such as the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE) and the British Computer Society (BCS) have provided general guidance for members for many years (IEEE, 1963; ACM, 1992). Such guidance also includes principles on designing robots to respect fundamental rights, the precautionary principle, inclusiveness, accountability, safety, reversibility, privacy, and maximising benefit/minimising harm. Whilst all of these can be important for the home, the notes on privacy are particularly interesting, as they highlight the need for designers to obtain valid consent prior to man-machine interactions. This is a clear challenge for human robot interactions: communicating sufficient information to users in a transparent, temporally sensitive manner.

Whilst these commitments to high level principles show a direction of travel, there is a need for operationalisation and for strategies to embed such values in design. A Responsible Research and Innovation approach is important here, as it focuses on practical reflection and interaction with a range of stakeholders, to ensure stewardship for the future (Von Schomberg, 2011, p. 9). Reflexivity of designers about their position, knowledge and impact (Grimpe et al., 2014) is key, but managing such responsibilities can be hard. For example, a spectrum of ICT researchers interviewed for the FRRIICT project stated they had less sense of responsibility for social consequences when conducting 'fundamental', 'generic', 'enabling' research than those closer to or working with specific end user groups, doing more 'applied', 'application orientated' work (pp. 209-211). Accordingly, the navigation of responsibility within roboticist communities is fragmented by role.
Within human computer interaction, the role of human values in design is growing, as the 'third wave' of HCI widens the field to consider cultural and societal aspects of computing, as opposed to purely functional aspects (Bødker, 2015). Authors such as Flanagan, Howe, & Nissenbaum (2008), Nissenbaum (2005) and Sellen, Rogers, Harper, & Rodden (2009) highlight the need to bring values into design. Sellen et al. (2009, p. 66) argue for user centricity in this process, stating: "HCI must also take into account the truly human element, conceptualising 'users' as embodied individuals who have desires and concerns and who function within a social, economic and political ecology". In bringing values into design, approaches such as 'value sensitive design' (VSD) (Friedman, Hendry, & Borning, 2017; Friedman, Kahn, & Borning, 2008) and human-centred design (Steen, 2011) are critical to understanding the real needs and values users want in domestic robotics.
There is growing recognition of the importance of these concepts in robotics too, as one prominent definition of human robot interaction states there is a need to "meet the social and emotional needs of their individual users as well as respecting human values" (Dautenhahn, 2018). Furthermore, there are already examples of the use of VSD for robotics, particularly care robots (van Wynsberghe, 2013b, 2013a).

Presenting Our Survey
The survey was constructed to establish the views and concerns of the general public around the emergence of domestic robots. It was informed by existing literature from a breadth of disciplines considering legal and ethical matters around robots, e.g. law, philosophy, computer science, engineering, and science fiction. We adopted a broad view of 'domestic robots' that includes interactive artificial intelligence, to capture new technology such as Alexa and other personal assistants that respondents may have experience of using. The survey was approved by the University of Nottingham ethics process, ran from 6th-28th March 2018, and was shared primarily through social media channels (Twitter, Facebook and Reddit). Of the 43 respondents, 18 were identifiably male, 18 female, 1 non-binary, and 6 remained anonymous. There was an age spread from teenage to over 70 years old, with a concentration from 20-45 years old. The survey was broken into 3 broad themes, namely: general feelings & experience with robots; trust and interaction; and future thinking. This enabled us to establish current understanding of and exposure to robots before exploring views on future usage, ethical guidance and trust in more depth. The findings from the 43 respondents can be summarised under these themes.

Finding 1: There was a general apathy towards existing technology
Existing technology such as Alexa, Google Home and other domestic robots such as Roomba (a robotic hoover) had been experienced by just over 30% of respondents, with the remainder citing cost (35%), dislike (21%) and lack of trust/privacy issues as the reasons for lack of engagement, with one participant stating: "I don't see a tremendous amount that these devices could do to improve our family life at the moment; certainly not enough to justify the cost and personal data implications. Also, my husband thinks that IoT [Internet of Things] tech is ridiculous and doesn't want it in the house."

Finding 2: 'We just want more time and less drudgery'
Whilst the above quote was representative of current interest, 50% of those surveyed were at least slightly positive about the increase in artificial intelligence in the home, with the two major benefits cited being time saving and reduction of domestic labour.

Finding 3: Privacy and informational harm are major concerns.
When asked to state two fears arising from the introduction of domestic robots, almost 80% of participants cited concerns around covert listening/privacy/hacking. This is indicative both of respondents' innate understanding of digital security issues and of the fearful response to new technologies (Bartlett, 2018).

b. Future Thinking
The challenge for the domestic robot industry will be managing the privacy trade-off for consumers. The responses made clear that people were aware of their privacy being traded for the benefits conveyed by AI/domestic robots, but in the majority of respondents' opinions, a balance in which the robot's duties outweigh the loss of privacy has yet to be achieved.
Given a choice of tasks for future domestic robots to perform, respondents were fairly evenly spread, with cleaning being the leading role, as the table below shows. However, medical/caring roles were also mentioned by 20% as roles that should not be performed (the dominant example here being childcare/parenting). Echoing finding 2 above, the conception of a domestic robot appears to be that of a domestic servant from the turn of the last century, performing menial, tedious household duties with little or no freedom to do anything else.

Finding 4: People are wary of conferring legal rights on domestic robots
Only 7% of respondents felt that domestic robots should have any rights protected by law, although the sentience of the robot was largely recognised as the deciding factor. This implies that most respondents would be comfortable with robots having legal rights if they had a level of self-awareness, which potentially opens a whole new legal challenge: to legislate for, essentially, a new species. On the surface, however, this rating places robots beneath animals in terms of legal rights and protections.

Finding 5: People want independence in ethical controls
As the table above shows, perhaps unsurprisingly, respondents want an independent body to be responsible for providing the ethical framework for any domestic robots. What was possibly more interesting was that government was the second most approved controller, perhaps demonstrating an inherent trust in governmental control (or perhaps a lack of trust in the other options). An interesting comment that backed this up is: 'I don't trust companies to regulate themselves at all, not under capitalism where they aim to make profits, see Uber evading police controls, Volkswagen messing about with the Diesel fumes, Facebook not caring about people abusing data mined from their service - it's a mess.' On further exploration, the need for regulation and control of ethics was largely recognised, but there was a broadly held concern that no one body should have responsibility for ethical control on its own. It was clear that a lot of thought was given to this question, as the following response represents: 'I think it should be an independent body which has no benefits coming from the nature of the regulations. I wouldn't trust the Government (think of the Snowden case), the Manufacturer or Seller as each one of these are interested in some sort of power (political, economic etc.)' The form of robot also links back to discussions of robot rights and, more broadly, robot personhood, which is an ongoing debate in the EU (European Parliament, 2017) and wider academic circles (Darling, 2016; Schafer, 2016). As such personhood would also enable responsibility to be passed from roboticist to robot, this remains a contested point (Delcker, 2018).

c. Trust & Interaction
Finding 7: We do not implicitly trust robot technology.
95% of respondents either do not implicitly trust (51%) or are unsure about trusting (44%) domestic robots. A general feeling that machines can fail, that errors can be made in programming, and that they are susceptible to hacking closely replicates the lack of trust highlighted earlier. Furthermore, the physical form of the robot does not greatly influence feelings of trust, with over 51% still distrusting robots regardless of form.

Finding 8: Robots may benefit the socially isolated
Surprisingly, despite the lack of trust and the recognition that domestic robots are only machines, almost three quarters of respondents felt that robots could help with feelings of social isolation. Respondents felt that vocal communication and 'conversation' would be of benefit: the ability to talk to something in a similar way to how people talk to their pets, but with the added advantage of an appropriate response, was broadly considered of benefit to the isolated. Furthermore, the physical form of the robot was felt to make a difference by 58% of respondents, with human-like robots considered to gain a better response for 'feeling more human'.

Unpacking the Findings
Space precludes consideration of all survey findings in detail, so we focus on Findings 3, 6 and 7, as these represent prominent challenges in the wider robotics literature.
The issue of trust is a recurrent theme across our analysis, and particularly in Finding 7. Trust is key because, as Holder et al. argue, "user acceptance will be critical to uptake [of robots] and acceptance will be based on trust" (Holder, Khurana, Harrison, & Jacobs, 2016, p. 384). Accordingly, we explore various aspects of trust below.
Human Robot Interaction and Trust: To address the shift from industrial robots to domestic robots that can "communicate with environment, follow human social norms, and mimic human abilities" (Haidegger et al., 2013, p. 1216), a better understanding of how we live with and design them is needed. The field of Human-Robot Interaction (HRI) has emerged, "dedicated to understanding, designing, and evaluating robotic systems for use by or with humans" (Goodrich & Schultz, 2007, p. 204). Mataric (2007, pp. 285-286) sets out a comprehensive list of human-robot interaction oriented challenges, similar to those outlined above, around safety, privacy, attachment, and trust. Attachment is interesting for the domestic environment, as users become attached to their robots: "Roomba users already refuse to have their Roombas replaced when they need repair, insisting on getting the same one back. What happens when the robot is much more interesting, intelligent, and engaging than the Roomba?" (Mataric, 2007, pp. 285-286). Not all users are so attached, and from an interactional perspective, the line between trustworthiness and distrust can be tenuous (Mataric, 2007, pp. 285-286). Whilst Wagner (2009) shows that studies indicate humans tend to trust and confide in robots, Pagallo argues, in contrast, that "personal and/or domestic robots will raise a number of psychological issues concerning feelings of subordination, attachment, trustworthiness, etc." (Pagallo, 2013, p. 502). Similarly, Holder et al. (2016) found that people have become more sceptical of robots as the technology has advanced and its capability increased. Hence, trust in human-robot interactions has to deal with the legacy that trust is normally formed between humans; as humans and robots co-exist, metrics for trust need to adapt to "the change in a user's perception of a robot from simply being a technology to being a social actor" (Moran, Bachour, & Nishida, 2015, p. 2).
Trust and Robot Form: One basis for trust is the form of the robot, ranging from non-humanoid (e.g. Roomba) to humanoid (e.g. Aeolus) or ethereal interactive AI (e.g. Alexa). We see this point in Finding 6. Some robots may utilise more human attributes in their relationships with users, "which can help increase the perceptions of anthropomorphism, including facial features, physical expressiveness, emotions and personality" (Moran et al., 2015, p. 1). Similarly, affective robots have abilities to "[recognize] and [synthesize] emotional cues and response but are still largely incapable of emotional reasoning" (Sullins, 2012, p. 399). However, given the possible emotional connection between human and robot, human psychology can be exploited and user behaviour manipulated (Darling, 2016). Hence, legal and design-based protections for vulnerable users who could be adversely influenced are necessary.

Law and Trust:
The law can support trust in robots by ensuring they are safe and respect privacy. A legal approach can set a level playing field in the market while regulating and protecting consumers by supporting "trust in brands, trust in functions, trust in privacy, trust in a fair market" (Holder et al., 2016, p. 384).
Safety - Currently, there is a lack of legislation governing robot safety. For example, Directive 93/42/EEC concerning medical devices (as amended by Directive 2007/47/EC) (the "Medical Device Directive") and Directive 90/385/EEC on active implantable medical devices ("AIMDD") only apply to care robots with medical functions, not to care robots with other functions. Instead, we need to turn to standards, such as ISO 13482, which plugs this gap. As care robots inherently deal with vulnerable populations, appropriate regulation is necessary (Holder et al., 2016, p. 389), especially given the multitude of contexts domestic robots may live in. Accordingly, whilst design can address some challenges, ensuring that the legal frameworks which do exist are applicable is vital to protecting user interests.
Privacy and Data Protection - As Finding 3 shows, privacy is a major concern. With domestic robots, privacy risks are amplified as they sit within the intimate setting of the home, collecting sensitive data from users longitudinally and profiling their behaviour over time to provide contextually appropriate services. New European data protection frameworks, such as the General Data Protection Regulation (GDPR) and the proposed ePrivacy Regulation, provide compliance requirements. These include challenging requirements around data portability (Article 20 GDPR; Urquhart, Sailaja, & McAuley, 2017), accountability (Article 5(2) GDPR; Urquhart, Lodge, & Crabtree, 2018) and the right to be forgotten (Article 17 GDPR). As in many areas of IT regulation, the fast pace of technological change and the slow legal landscape mean there is an increasing turn to design as a regulatory tool (Lessig, 2006; Urquhart, 2017). Law and policy concepts like privacy by design and default (PbD - Article 25 GDPR) and security by design (Article 32 GDPR) provide the mandate for ensuring personal data driven technologies embed safeguards from the beginning, not after a breach occurs. Supporting how best roboticists can do PbD in practice requires extra thought, as it does for other developers (Hadar et al., 2017; Luger, Urquhart, Rodden, & Golembewski, 2015). As Mataric recognises, 'Privacy has to be taken seriously when the robot is designed, not after the fact, as a result of users' complaints' (Mataric, 2007, pp. 285-286). Navigating the interface between HRI practitioners and researchers and the law will be critical, as it already is for HCI and law (Urquhart & Rodden, 2017).

Transparency & Control:
Linked to data protection are questions of transparency and control. The degree of agency a robot has is a major concern, as this affects the degree of uncertainty about, and the ability to control, its actions. Oversight of autonomous decisions, and how these are made accountable to users, is as much a design issue as a legal one (Edwards & Veale, 2017). It is predicted that robots will eventually achieve a level of autonomy where "they themselves become the data controller and responsible for compliance with data privacy legislation" (Holder et al., 2016, p. 395), a prediction also supported by Pagallo (2013, p. 502). However, for now, we need to focus on establishing and operationalising the responsibility of roboticists to their users and, in particular, protecting their legal rights. Translation between legal frameworks and design guidelines is important for this (Urquhart, 2014).

D. Conclusions
We have argued that the growth of smart homes is paving the way for domestic robots. There is a multitude of existing challenges around robotics that need to be dealt with. Our findings from the short survey were numerous, but we focused in more depth on the impact of the form of robots, privacy, and various aspects of trust. We argue that, given the pitfalls currently being experienced with emergent smart homes, roboticists have a responsibility to learn from these mistakes and to design domestic robots in legally, socially and ethically responsible ways. A key dimension of this is the need to design technologies after engaging with, understanding and respecting the needs of users. Whilst commitments to many high-level ethical principles are emerging in new codes of conduct for roboticists, these need to be situated and operationalised. The current focus in HCI on values in design is one approach to doing this. Similarly, the turn in law to design for regulation means there is a similar drive to consider end user interests and rights within the design process. One of the key lessons from human computer interaction is the importance of understanding user values and perspectives when deploying technologies in the home. If the roboticists creating domestic robots engage with end user interests, there is a chance such robots can emerge in a more responsible manner.

Funding:
The research benefitted from the activities undertaken in: the "Moral-IT: Enabling Design of Ethical & Legal IT Systems" project as part of the Horizon Digital Economy Research Institute (EPSRC Grant EP/M02315X/1); RCUK Horizon Centre for Doctoral Training (EPSRC Grant EP/G037574/1).