Design principles for AI-enabled UI

In my previous articles, I described the design process for AI-enabled user interfaces at Deloitte’s impact foundation project ‘Social Robot Alice.’ I also identified four baseline design principles to keep in mind when designing AI-enabled user interfaces:

1. discovery and expectation management;

2. design for forgiveness;

3. data transparency and tailoring, and

4. privacy, security, and control.

With this article, I cover how we implemented these design principles in an AI-enabled voice user interface (VUI). (Read about the design principles here.) This article clarifies the motives behind the chosen design decisions and uncovers other design considerations when designing an AI-enabled voice user interface such as ‘Social Robot Alice.’

Case: ‘Social robot Alice’

‘Social Robot Alice’ is a VUI that acts as a social buddy for senior adults facing loneliness, an initiative by VU University and Deloitte. The robot has two primary users: the caretaker (user A) and the senior adult (user B). The caretaker (user A) configures each robot to be personalized for a specific senior adult (user B). (Read more about the project here.)

User A: Caretaker, User B: Senior adult | Image: naimavanesch
1. Discovery and expectation management — Set user expectations well to avoid false expectations.

1.1 Users should be aware of what the tool can, and cannot do

Clear and focused personalized functionalities and affordances: Affordances are what the environment offers the individual (Gibson, 1966). Clearly state what Alice has to offer the caretaker and the senior adult. What makes Alice unique is that she is entirely customized and configured by the caretaker for a specific senior adult. The robot knows the user’s daily schedule, the phone numbers of friends and family, their positive memories and the subjects they like to talk about, and their favorite songs. Clear functionalities set user expectations and avoid false ones.

Media material: Product manuals and videos communicate the functionalities and affordances to the user and set expectations up front.

Onboarding with a trusted person to trigger copying behavior: The aim is for the senior adult to create an affective bond with the robot. According to professor Konijn of VU University (2016), a medium (like the robot) needs to be relevant to the user to create an affective bond. Thus, by having a familiar, trusted person show the senior adult the personalized, relevant functionalities and affordances, the senior adult copies the behavior and knows what to expect. Research indicates that senior adults start creating an affective bond with the robot after three sessions of interacting with it. The longer a session (>6 hours), the more the senior user enjoyed the presence of the robot.

Visual cues: We embedded an RGB light depicting listening (blinking red), talking (flashing blue), and processing (blinking yellow) so the user knows what is happening via visual cues.

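To keep such cues consistent, the state-to-light mapping can live in one place. Here is a minimal Python sketch of the idea; the state names and the set_led helper are illustrative assumptions, not Alice’s actual code.

from enum import Enum

class RobotState(Enum):
    LISTENING = "listening"
    TALKING = "talking"
    PROCESSING = "processing"

# (color, pattern) per state, mirroring the cues described above
LIGHT_CUES = {
    RobotState.LISTENING: ("red", "blink"),
    RobotState.TALKING: ("blue", "flash"),
    RobotState.PROCESSING: ("yellow", "blink"),
}

def set_led(color: str, pattern: str) -> None:
    # Stand-in for the hardware call that drives the RGB light (hypothetical).
    print(f"LED -> {color} ({pattern})")

def on_state_change(state: RobotState) -> None:
    color, pattern = LIGHT_CUES[state]
    set_led(color, pattern)

on_state_change(RobotState.LISTENING)  # LED -> red (blink)
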
1.2 Users should expect a benefit from minimal input

Connecting systems (robot, app, dashboard): Caretakers keep a schedule of day-to-day tasks and activities for their clients, and they enter that schedule in the app. Caretakers are already used to creating schedules for their senior adults, so this is ‘minimal input.’ The app connects to ‘Social robot Alice,’ and Alice handles the rest.

The app that controls Social Robot Alice (Alice Illustration by Yun Fu)

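To illustrate this ‘minimal input,’ a caretaker’s day schedule could be represented as a short list of structured entries that the robot acts on. A sketch in Python; the field names are hypothetical, not the app’s actual data model.

from dataclasses import dataclass
from datetime import time

@dataclass
class ScheduleEntry:
    start: time
    activity: str
    robot_action: str  # what Alice does at this moment (hypothetical field)

# A caretaker-entered day schedule, as the robot might receive it (illustrative).
day_schedule = [
    ScheduleEntry(time(8, 30), "breakfast", "announce_activity"),
    ScheduleEntry(time(10, 0), "call with family", "start_phone_call"),
    ScheduleEntry(time(14, 0), "favorite music", "play_song"),
]

for entry in day_schedule:
    print(f"{entry.start.strftime('%H:%M')} - {entry.activity} ({entry.robot_action})")
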
Feedback loop: Alice and the app connect to a dashboard that helps caretakers and designers train Alice; based on the activity log, it can give recommendations and predictions on how to further personalize Alice for the user.

1.3 Prepare for undiscovered and unexpected usage

Survival pilot: In the last few months, we ran a survival pilot with Alice. Alice spent full days with the elderly so we could check whether she can survive a full day. During the survival pilot tests, we observed the robot interacting with participants and discovered unexpected usage. These insights, in turn, were translated into user stories and requirements for the development team.

Motion sensor: During user tests, we found that participants treat the robot as an actual entity (likely due to the affective bond). When they want the robot’s attention, they wave their hand in front of her eyes. Based on this discovery, we implemented a motion sensor near Alice’s eyes that recognizes the senior adult’s waving behavior, so the robot can ‘turn on’ and give the user her attention.

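A minimal sketch of how such wave-to-wake behavior could work: several motion events inside a short sliding window count as a wave. The threshold values and function names are assumptions for illustration.

import time

WAVE_EVENTS_NEEDED = 3      # motion triggers within the window that count as a wave (assumed)
WAVE_WINDOW_SECONDS = 1.5   # sliding window length (assumed)

class WaveDetector:
    def __init__(self):
        self.events = []

    def on_motion(self, timestamp: float) -> bool:
        """Record a motion event; return True when it looks like a hand wave."""
        self.events.append(timestamp)
        # keep only events inside the sliding window
        self.events = [t for t in self.events if timestamp - t <= WAVE_WINDOW_SECONDS]
        return len(self.events) >= WAVE_EVENTS_NEEDED

def wake_robot() -> None:
    print("Alice turns on and gives the user her attention.")

detector = WaveDetector()
now = time.time()
# Simulated rapid motion events, as a hand waving in front of the eyes would produce:
for offset in (0.0, 0.4, 0.8):
    if detector.on_motion(now + offset):
        wake_robot()
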
Presence: Another unexpected finding is that the robot’s presence itself is already comforting and makes senior adults feel less alone. The lonelier a user is, the more likely they are to accept the robot. We measured the degree of loneliness before and after user test sessions.

1.4 Educate the user about the unexpected

Repeating to create awareness: The robot often repeats that she is six years old. By repeating this, she manages user expectations: the robot is still young and might show unexpected behavior, as children do (e.g., suddenly saying or doing things out of the ordinary).

2. Design for forgiveness — The AI will make mistakes. Design the UI so users are inclined to forgive it.

2.1 Design the tool in a way that users will forgive it when it makes mistakes

Personality/character: A way to design for forgiveness is to use a UI that simulates creatures or objects humans are already naturally inclined to forgive, like children or animals. Therefore we designed the robot as a curious, empathic six-year-old child. She is happy, can show her 'sad face', is interested in her user, and loves to make her surroundings happy (view personality scales).

Exterior: The “uncanny valley” phenomenon, first coined by Japanese roboticist Masahiro Mori (1970), refers to the idea that if a robot resembles a human too closely, it can create feelings of disgust toward the robot (Lanteigne, 2019). We designed the current robot exterior together with more than 20 senior adults to create an exterior that appeals to the target group. The result is a robot that does not look very advanced (white, matt, simple). Interestingly, the final design does not closely resemble a real human. However, it has a few human-like characteristics, such as eyes, eyebrows, and a physical head. This version of the robot sits between non-human and humanoid (see image below), which suggests that this exterior increases the likelihood of a user’s empathy toward the robot.

Marsha Wichers (2019)

Also, we discovered that senior adults associate black with death, so we purposely did not use any shiny, futuristic black material. And we offer a range of cuddly jackets (like teddy jackets, wool knit jackets, and hoodies) to make the robot more approachable and cuddly. An unexpected discovery is that women show interest in knitting clothes for the robot themselves. This is a sign of anthropomorphism: senior adults assign human characteristics to the robot on their own.

Visual cues: We discussed how, and whether, we wanted to depict emotions with the eyes. We experimented with lights, projections, and video screens. After testing, we chose to play .mp4 videos on a screen to communicate emotions through eye animations. This route allows us to iterate and update the design of the eyes easily and quickly. Before answering questions, the eyes communicate that the robot is unsure (if the confidence level of understanding the user is low) to manage expectations.

Write dialog based on the user’s typical way of talking: We found that senior adults speak differently because they grew up in a different time. Heck, for all people, the way of conversing differs with upbringing, culture, religion, language, and social bonds. All of these affect one’s typical way of talking. We learned from the users’ way of speech and trained the robot to recognize intents and commands phrased the way they spoke back in the day, so they feel more connected with the robot.

Verbal language: The robot uses child-like ways of speaking, like diminutives, short sentences, and telling stories in a child-like way. The robot will also sometimes fail to understand commands. For this, we created several fallback algorithms and sentences so the robot can repeat the most likely command and intent for the user to confirm (or not), as in the example below.

Example
User: Alice, (*muttering*)
Robot: (confidence level: 0.0*)
Robot: {repeat most likely intent = “Do you want me to play music?”} (guessing)
User: Yes, I want you to play music
Or
User: No & rephrase + repeat question

If (confidence level below the threshold*), then (repeat the most likely intent as a question) and (user: decision(yes/no)), then (execute_or_fallback)

*Intent matches have an intent detection confidence value in a range from 0.0 (completely uncertain) to 1.0 (completely certain).

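In code, that fallback logic might look like the following Python sketch; the 0.5 threshold and the function names are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff; intent confidence ranges from 0.0 to 1.0

def handle_utterance(intent: str, confidence: float, confirm) -> str:
    """Execute confidently-understood intents; otherwise repeat the best guess back."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"execute: {intent}"
    # Low confidence: repeat the most likely intent as a question.
    if confirm(f"Do you want me to {intent}?"):
        return f"execute: {intent}"
    return "fallback: ask the user to rephrase"

# The user mutters something that was weakly matched to "play music":
print(handle_utterance("play music", 0.2, confirm=lambda q: True))   # execute: play music
print(handle_utterance("play music", 0.2, confirm=lambda q: False))  # fallback
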
Voice engineering: We changed the pitch of the Google voice to resemble the high tone of a child’s voice. We changed the pace too: users act more empathically when the robot has child-like features. We created an algorithm to change the intonation, speed, breaks, and timing of words in sentences. We categorized script blocks and sentences and designed an algorithm that predicts which sentences need non-lexical sounds and which emotion should be output in the predicted appropriate way.

Example
*User calls a ticket agency and gets greeted by a voicebot*
Voicebot: (introduction) “Would you like to book a flight, cancel, or change your booking?”
User: “My cat just died. So, uuh… I would like to cancel.”
-- How to respond? --
Voicebot: “O. (break=1sec) I’m sorry to hear that. (break=1sec)”
Normally, there is a standard break between sentences after a period. This little break could plant doubt and uncertainty and influence the user in a negative way.
[listen to soundfile: unempathic response]
Vs.
Voicebot: “Oooohhhh-I’m-sorry-to-hear-that!” (no breaks, gives immediate reassurance)
[listen to soundfile: empathic response]
Thanks to Phoebe and Maikel for the workshop they gave on voice branding! voicebranding.nl

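Text-to-speech engines like Google’s accept SSML, which makes such pacing choices explicit. A small sketch, assuming SSML output: the empathic variant drops the between-sentence breaks so the reassurance arrives immediately.

def to_ssml(sentences, empathic: bool) -> str:
    """Join sentences into SSML, with or without 1-second breaks between them."""
    separator = " " if empathic else ' <break time="1s"/> '
    return "<speak>" + separator.join(sentences) + "</speak>"

sentences = ["Oh.", "I'm sorry to hear that."]
print(to_ssml(sentences, empathic=False))
# <speak>Oh. <break time="1s"/> I'm sorry to hear that.</speak>
print(to_ssml(sentences, empathic=True))
# <speak>Oh. I'm sorry to hear that.</speak>
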
Natural language understanding response: You need to take the user and context into account. For example, in the first example, if someone’s husband has just passed away, it’s not advisable to try to cheer them up with: ‘You-have-so-many-friends. It-will-be-alright!+[happy-eyes-emotion]’. It’s better to respond in the following way: ‘Aah-I-understand-you-are-sad. It-must-be-difficult. You-have-had-a-hard-life’+[sad-eyes-emotion]. However, this is a difficult scenario and hard to predict correctly right away. Therefore we are still analyzing, monitoring, and training Alice to reach a high confidence level in her responses. It is also crucial to take the amount of ‘negative sentiment’ over a period of time into account. If a user has been outputting happy sentiments and sentences for a month but suddenly utters negative sentences, Alice is programmed to still respond happily. However, if a senior user has been consistently very negative over a month, the algorithm needs to use this data to respond accordingly. The system signals this negative speech behavior to the caretaker, and the caretaker can give this senior user more human attention. (A code sketch of this rule follows the example below.)

Example of happy, neutral, and sad user intents and robot’s possible responses
Example
How to analyze and predict whether Alice needs to respond happy 😃 or sad 😔

Option 1: 😃 </> 😃
User: I don’t like to live (😔 sad trigger, sentiment score between -1 and -0.1)
Alice: Ahh, I am so sorry to hear that! You have so much going for you! Please don’t think that way. (😃 happy trigger)

Or

Option 2: 😔 </> 😔
User: I don’t like to live (😔 sad trigger, sentiment score between -1 and -0.1)
Alice: Ahh, I understand you are sad. I know it’s hard. You have had a hard life. (😔 response to sad trigger)

*Alice is never really ‘sad’. However, she can respond in a certain way to show empathy and to be comforting.

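A Python sketch of this sentiment-over-time rule: an isolated sad utterance from a normally positive user gets a cheering response, while a sustained negative trend switches to the empathic register and signals the caretaker. The window size and thresholds are assumptions.

from collections import deque

WINDOW = 30                 # number of recent utterances considered (assumed)
SUSTAINED_NEGATIVE = -0.3   # average score below this counts as a negative period (assumed)

def notify_caretaker(reason: str) -> None:
    print(f"signal to caretaker: {reason}")

class SentimentTracker:
    def __init__(self):
        self.scores = deque(maxlen=WINDOW)

    def respond(self, utterance_score: float) -> str:
        self.scores.append(utterance_score)
        average = sum(self.scores) / len(self.scores)
        if utterance_score < 0 and average >= SUSTAINED_NEGATIVE:
            # Isolated sad utterance from a normally positive user: cheer them up.
            return "happy response: 'You have so much going for you!'"
        if average < SUSTAINED_NEGATIVE:
            notify_caretaker("sustained negative sentiment")
            return "empathic response: 'I understand you are sad. I know it's hard.'"
        return "neutral/happy response"

tracker = SentimentTracker()
for score in [0.6, 0.4, -0.8]:  # a mostly positive period, then one sad utterance
    print(tracker.respond(score))
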
Listening: We carefully designed rules for how the robot should listen. Also, with the user in mind, the robot has a long listening window (>10 sec) to avoid interrupting the user.

Interruption: A eureka moment came when we consciously realized that human conversations don’t ‘beep’ to cue the conversation partner that they are allowed to speak. So why should Alice? We removed the beeping signal and designed the robot so you can interrupt her whenever you want (like how people talk over each other). Other than verbally interrupting her, there are hardware options to stop her too, like the app, the cable, and her on/off button.

2.2 Design delightful features to increase the likelihood of forgiveness

Personalized affordances: The robot offers the user personalized affordances to bond with. Affordances include, e.g., being a companion and providing structure to the senior adult’s day. One of the functionalities is knowing the user’s favorite songs. The robot triggers songs (configured by the caretaker) during the day, which makes the senior adult visibly happy, to the point that they sing along and keep wanting more. Affordances influence willingness to use the robot and also affect how engaged users are with a character “as a friend” (Van Vugt et al., 2006).

2.3 Design the ability to use AI without internet connectivity

Hybrid IT architecture: The robot is part of an advanced IT architecture. We carefully determined which functionalities should also work offline and be stored locally, and designed fallbacks for those that need internet connectivity to update (e.g., the possibility to sync data). The current version of the robot still uses internet connectivity; the next phase will focus on making some functionalities available offline as well.

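A sketch of that offline-first pattern, assuming local JSON storage and an optional cloud sync: the robot always reads and writes locally, and syncing only happens when connectivity is available. All names are illustrative.

import json
from pathlib import Path

LOCAL_STORE = Path("alice_local_store.json")  # hypothetical local store

def load_local() -> dict:
    """Read state from local storage; works with or without connectivity."""
    if LOCAL_STORE.exists():
        return json.loads(LOCAL_STORE.read_text())
    return {}

def save_local(state: dict) -> None:
    LOCAL_STORE.write_text(json.dumps(state))

def has_connectivity() -> bool:
    return False  # stand-in for a real connectivity check

def sync_to_cloud(state: dict) -> None:
    if not has_connectivity():
        return  # fallback: keep working offline, sync later
    # ... upload state to the cloud here ...

state = load_local()
state["favorite_song"] = "Que Sera, Sera"
save_local(state)        # always succeeds locally
sync_to_cloud(state)     # no-op while offline
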
3. Data transparency and tailoring — Be transparent about collecting data and offer users the ability to tailor it.

3.1 The AI should be transparent in what data it has of the user

Monitoring data via dashboard: The dashboard lets users monitor the AI data and activity log.

They can see which information is stored, such as user and system triggers. Currently, we use this dashboard for research purposes. We categorized ‘alarming intents’ for the robot to recognize. If one of these alarming intents (e.g., ‘I want to die’) is identified, a signal is sent to the caretaker.

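A minimal sketch of such an alarming-intent check; the phrase list and notification hook are illustrative, and the real system would match recognized intents rather than raw strings.

# Hypothetical alarming-intent check (illustrative; not the actual matching logic).
ALARMING_INTENTS = {"i want to die", "i don't want to live"}

def notify_caretaker(utterance: str) -> None:
    print(f"ALERT to caretaker: alarming intent detected ({utterance!r})")

def check_for_alarm(utterance: str) -> bool:
    normalized = utterance.lower().strip()
    if any(phrase in normalized for phrase in ALARMING_INTENTS):
        notify_caretaker(utterance)
        return True
    return False

check_for_alarm("I want to die")  # triggers the caretaker signal
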
3.2 Users should be able to adjust what the AI has learned

Tailoring data via dashboard: The AI will make mistakes and will output predictions that users do not desire. Therefore, besides designing for discovery and forgiveness, the dashboard offers caretakers the option to tailor forecasts to their knowledge, e.g., by adjusting what the AI has learned.

3.3 Users should be able to provide input so the AI can learn

Training the robot via dashboard: The dashboard drives machine learning. It also offers caretakers the option to indicate whether the robot behaved correctly after user triggers. Being a caretaker requires in-depth personal knowledge of and experience with the senior adult. The AI tool within the robot still needs the caretaker to understand the context of the data. Therefore the AI, being an agent, gives the caretaker the final say.

Personalization of the robot:

● Choose the character and personality of the robot
● Personalize the robot by feeding the AI personal information about the user
● Change user preferences: turn functionalities specific to the user on/off, manage favorite songs, manage ‘open questions’ personalized for the user
● Change system preferences: wake-words, volume control, response modes, noise reduction, time zones, languages
● Customize the weekly/daily schedule and input your own dialog for the robot
● Change the robot’s exterior with outfits (e.g., a cuddly teddybear jacket)

Senior adults are positively surprised when they discover that the social robot has personal knowledge about them. It triggers them in a positive way, and they start reminiscing about positive memories. It is a trigger for the senior adult to start a conversation with the robot.

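Taken together, these options amount to a per-user configuration that the caretaker maintains. A sketch of what such a configuration could look like; all keys and values are hypothetical.

# Hypothetical per-user robot configuration maintained by the caretaker.
alice_config = {
    "character": {"age": 6, "personality": "curious, empathic"},
    "user_profile": {
        "name": "Mrs. Jansen",                    # illustrative
        "favorite_songs": ["Que Sera, Sera"],
        "open_questions": ["Tell me about your garden?"],
    },
    "functionalities": {"music": True, "calls": True},
    "system": {
        "wake_word": "Alice",
        "volume": 0.7,
        "language": "nl-NL",
        "timezone": "Europe/Amsterdam",
    },
    "outfit": "teddybear jacket",
}
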
4. Privacy, security, and control — Gain trust by driving privacy, security, and the ability to control the AI.

4.1 Design top-notch security for users to trust AI with personal data

Robot brain server: The Alice ecosystem envisions a robot brain server (RBS). The RBS is an artificial cognitive service system that handles the data, data security, and artificial intelligence that drive Alice, and the code on which her functionalities run (Hoorn, 2018).

Hybrid encrypted data storage: The data the robot knows about the user is encrypted and carefully stored, either locally on the robot or in the cloud.

Two-factor authentication (2FA): The robot, the controlling app, and the monitoring and analysis dashboard are integrated. Since these systems handle personal, sensitive data about the user, we expect users to accept the 2FA set-up: it exists for the user’s own good, and the system must do its very best to protect the user’s data.

Voice recognition: The robot might hear other people near it (caretakers, family members, bystanders). The robot should only execute commands from recognized controlling users, like the senior adult and their caretaker. The voice recognition is designed so the robot answers questions from every person talking to her; however, some commands can only be activated by the senior adult or their caretaker. It’s like a child who can speak to anyone but only executes tasks her parents command her to do.

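A minimal sketch of that rule, treating speaker identification as a black box: anyone gets answers, but only recognized controlling users get commands executed. Names are illustrative.

CONTROLLING_USERS = {"senior_adult", "caretaker"}  # recognized controlling users (assumed IDs)

def handle_input(speaker_id: str, text: str, is_command: bool) -> str:
    """Answer questions from anyone; execute commands only for recognized users."""
    if not is_command:
        return f"answer: {text}"
    if speaker_id in CONTROLLING_USERS:
        return f"execute command: {text}"
    return "politely decline: command not from a recognized controlling user"

print(handle_input("visitor", "How old are you?", is_command=False))
print(handle_input("visitor", "Call my daughter", is_command=True))
print(handle_input("senior_adult", "Call my daughter", is_command=True))
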
Facial recognition: For the roadmap set for this project, we are not focusing on this technology. However, if it suits the roadmap and vision of your project, please consider setting it up accordingly.

4.2 Prove delivery on promises by offering test runs

Affective bonding theory: We found that a slow-paced onboarding period is crucial for senior adults to create an emotional bond with the robot. Affective disposition theory states that individuals are predisposed to like or dislike particular characters based on those characters’ moral fiber. Characters that behave morally “sane” are liked (Konijn, Hoorn, 2016), and determining that feeling takes time. Moreover, according to Konijn & Hoorn (2016), without relevance, no emotion will occur, and thus no affective bonding can be expected. The robot should, therefore, be of use to the user.

4.3 Design ability for users to intervene and take over control

Application intervention: Be able to change settings immediately via the application. With the app, a caretaker can change triggers, dialog, and user preferences, and turn the robot on/off or set it to ‘idle mode.’

Exterior intervention: Take control of the robot with the ‘stop’ button to stop the robot’s current action and reset it.

Voice command: Use the voice command ‘stop’ to stop the robot.

Hardware: Use the speaker buttons to take over control. Also, by adding more (Bluetooth-)linked speakers with a mic function around the house, the senior adult can control the robot wherever they are. In a bigger house, more speakers can be added.

4.4 The AI should learn from user interventions

Interventions via the dashboard and application: When a user intervenes and takes control over the AI, the AI learns from this behavior. The application and dashboard monitor and save these interventions, and later check with the caretaker whether they should be kept for next time.

4.5 AI should not do anything without the user’s consent

Control via application: In the case of the social robot, the caretaker is the one who configures the robot to execute actions. This is because the actual user, the senior adult, tends not to want to bother other people with their needs (like calls with their family). The caretaker effectively ‘decides (social and entertaining) activities’ for the senior adult via the social robot (like planning calling time with family members).

4.6 The system should notify users of system errors

Dashboard error log and predictive solutions: The system notifies the caretaker about what it needs from the user. The AI within the dashboard offers the user solutions to fix the error.

The takeaway

When designing an AI-enabled VUI, we took the design principles into account and learned from extensive user testing and research before deciding how we wanted to embed the four design principles in our tool. Hopefully, you can learn from our findings and the motives behind our design decisions.

At Deloitte Digital | VU University ‘Social Robot Alice’, we continuously learn about AI-enabled social robotics and (voice) user interfaces. One of our design principles is ‘knowledge sharing to create an impact.’ If you are interested in collaborating as a designer, a developer, a caretaker, a care center, a psychologist, a senior adult, or in any other role — please reach out to us via www.alicecares.nl

Thanks to AI experts and lovely team members for proofreading

Inge de Jong, Marijn Hagenaar, Jelle Roebroek, Esther Stapel, Franklin Heijnen & Yun Fu

Originally published at https://blog.prototypr.io/how-to-implement-ai-enabled-ui-design-principles-for-voice-4f8622ac4e4b
