Artificial Intelligence Picks Up Bad Habits All Too Easily. What Can Be Done?

Jonathan Vanian, July 12, 2018

Artificial intelligence can mimic and reinforce human decision-making, and it amplifies human biases in the process. Can the tech giants solve the problem of discrimination in big data?

Illustration by Giacomo Carmagnola

WHEN TAY MADE HER DEBUT in March 2016, Microsoft had high hopes for the artificial intelligence–powered “social chatbot.” Like the automated, text-based chat programs that many people had already encountered on e-commerce sites and in customer service conversations, Tay could answer written questions; by doing so on Twitter and other social media, she could engage with the masses.

But rather than simply doling out facts, Tay was engineered to converse in a more sophisticated way—one that had an emotional dimension. She would be able to show a sense of humor, to banter with people like a friend. Her creators had even engineered her to talk like a wisecracking teenage girl. When Twitter users asked Tay who her parents were, she might respond, “Oh a team of scientists in a Microsoft lab. They’re what u would call my parents.” If someone asked her how her day had been, she could quip, “omg totes exhausted.”

Best of all, Tay was supposed to get better at speaking and responding as more people engaged with her. As her promotional material said, “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” In low-stakes form, Tay was supposed to exhibit one of the most important features of true A.I.—the ability to get smarter, more effective, and more helpful over time.

But nobody predicted the attack of the trolls.

Realizing that Tay would learn and mimic speech from the people she engaged with, malicious pranksters across the web deluged her Twitter feed with racist, homophobic, and otherwise offensive comments. Within hours, Tay began spitting out her own vile lines on Twitter, in full public view. “Ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism,” Tay said, in one tweet that convincingly imitated the defamatory, fake-news spirit of Twitter at its worst. Quiz her about then-president Obama, and she’d compare him to a monkey. Ask her about the Holocaust, and she’d deny it occurred.

In less than a day, Tay’s rhetoric went from family-friendly to foulmouthed; fewer than 24 hours after her debut, Microsoft took her offline and apologized for the public debacle.

What was just as striking was that the wrong turn caught Microsoft’s research arm off guard. “When the system went out there, we didn’t plan for how it was going to perform in the open world,” Microsoft’s managing director of research and artificial intelligence, Eric Horvitz, told Fortune in a recent interview.

After Tay’s meltdown, Horvitz immediately asked his senior team working on “natural language processing”—the function central to Tay’s conversations—to figure out what went wrong. The staff quickly determined that basic best practices related to chatbots were overlooked. In programs that were more rudimentary than Tay, there were usually protocols that blacklisted offensive words, but there were no safeguards to limit the type of data Tay would absorb and build on.
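
For readers who want a concrete picture of the kind of protocol Horvitz's team found missing, the snippet below is a minimal, hypothetical sketch of a blacklist check on a bot's outgoing reply. The blocked terms and the filtering rule are placeholders; Microsoft's actual safeguards are not public.

```python
# Hypothetical sketch of a blacklist-style safeguard on a chatbot's outgoing reply.
# BLOCKED_TERMS holds placeholder strings; Microsoft's real term lists are not public.
BLOCKED_TERMS = {"offensive_term_a", "offensive_term_b"}

def is_safe_to_post(reply: str) -> bool:
    """Return True only if the reply contains none of the blocked terms."""
    words = {word.strip(".,!?\"'").lower() for word in reply.split()}
    return words.isdisjoint(BLOCKED_TERMS)

print(is_safe_to_post("omg totes exhausted"))            # True
print(is_safe_to_post("offensive_term_a is my fave"))    # False
```

A filter like this only catches terms it already knows about; as the passage above notes, it does nothing to constrain what the bot learns from the data it absorbs, which is the gap that caught Tay.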

Today, Horvitz contends, he can “love the example” of Tay—a humbling moment that Microsoft could learn from. Microsoft now deploys far more sophisticated social chatbots around the world, including Ruuh in India, and Rinna in Japan and Indonesia. In the U.S., Tay has been succeeded by a social-bot sister, Zo. Some are now voice-based, the way Apple’s Siri or Amazon’s Alexa are. In China, a chatbot called Xiaoice is already “hosting” TV shows and sending chatty shopping tips to convenience store customers.

Still, the company is treading carefully. It rolls the bots out slowly, Horvitz explains, and closely monitors how they are behaving with the public as they scale. But it’s sobering to realize that, even though A.I. tech has improved exponentially in the intervening two years, the work of policing the bots’ behavior never ends. The company’s staff constantly monitors the dialogue for any changes in its behavior. And those changes keep coming. In its early months, for example, Zo had to be tweaked and tweaked again after separate incidents in which it referred to Microsoft’s flagship Windows software as “spyware” and called the Koran, Islam’s foundational text, “very violent.”

To be sure, Tay and Zo are not our future robot overlords. They’re relatively primitive programs occupying the parlor-trick end of the research spectrum, cartoon shadows of what A.I. can accomplish. But their flaws highlight both the power and the potential pitfalls of software imbued with even a sliver of artificial intelligence. And they exemplify more insidious dangers that are keeping technologists awake at night, even as the business world prepares to entrust ever more of its future to this revolutionary new technology.

“You get your best practices in place, and hopefully those things will get more and more rare,” Horvitz says. With A.I. rising to the top of every company’s tech wish list, figuring out those practices has never been more urgent.

FEW DISPUTE that we’re on the verge of a corporate A.I. gold rush. By 2021, research firm IDC predicts, organizations will spend $52.2 billion annually on A.I.-related products—and economists and analysts believe they’ll realize many billions more in savings and gains from that investment. Some of that bounty will come from the reduction in human headcount, but far more will come from enormous efficiencies in matching product to customer, drug to patient, solution to problem. Consultancy PwC estimates that A.I. could contribute up to $15.7 trillion to the global economy in 2030, more than the combined output of China and India today.

The A.I. renaissance has been driven in part by advances in “deep-learning” technology. With deep learning, companies feed their computer networks enormous amounts of information so that they recognize patterns more quickly, and with less coaching (and eventually, perhaps, no coaching) from humans. Facebook, Google, Microsoft, Amazon, and IBM are among the giants already using deep-learning tech in their products. Apple’s Siri and Google Assistant, for example, recognize and respond to your voice because of deep learning. Amazon uses deep learning to help it visually screen tons of produce that it delivers via its grocery service.

And in the near future, companies of every size hope to use deep-learning-powered software to mine their data and find gems buried too deep for meager human eyes to spot. They envision A.I.-driven systems that can scan thousands of radiology images to more quickly detect illnesses, or screen multitudes of résumés to save time for beleaguered human resources staff. In a technologist’s utopia, businesses could use A.I. to sift through years of data to better predict their next big sale, a pharmaceutical giant could cut down the time it takes to discover a blockbuster drug, or auto insurers could scan terabytes of car accidents and automate claims.

But for all their enormous potential, A.I.-powered systems have a dark side. Their decisions are only as good as the data that humans feed them. As their builders are learning, the data used to train deep-learning systems isn’t neutral. It can easily reflect the biases—conscious and unconscious—of the people who assemble it. And sometimes data can be slanted by history, encoding trends and patterns that reflect centuries-old discrimination. A sophisticated algorithm can scan a historical database and conclude that white men are the most likely to succeed as CEOs; it can’t be programmed (yet) to recognize that, until very recently, people who weren’t white men seldom got the chance to be CEOs. Blindness to bias is a fundamental flaw in this technology, and while executives and engineers speak about it only in the most careful and diplomatic terms, there’s no doubt it’s high on their agenda.

The most powerful algorithms being used today “haven’t been optimized for any definition of fairness,” says Deirdre Mulligan, an associate professor at the University of California at Berkeley who studies ethics in technology. “They have been optimized to do a task.” A.I. converts data into decisions with unprecedented speed—but what scientists and ethicists are learning, Mulligan says, is that in many cases “the data isn’t fair.”

Adding to the conundrum is that deep learning is much more complex than the conventional algorithms that are its predecessors—making it trickier for even the most sophisticated programmers to understand exactly how an A.I. system makes any given choice. Like Tay, A.I. products can morph to behave in ways that its creators don’t intend and can’t anticipate. And because the creators and users of these systems religiously guard the privacy of their data and algorithms, citing competitive concerns about proprietary technology, it’s hard for external watchdogs to determine what problems could be embedded in any given system.

The fact that tech that includes these black-box mysteries is being productized and pitched to companies and governments has more than a few researchers and activists deeply concerned. “These systems are not just off-the-shelf software that you can buy and say, ‘Oh, now I can do accounting at home,’ ” says Kate Crawford, principal researcher at Microsoft and codirector of the AI Now Institute at New York University. “These are very advanced systems that are going to be influencing our core social institutions.”

THOUGH THEY MAY not think of it as such, most people are familiar with at least one A.I. breakdown: the spread of fake news on Facebook’s ubiquitous News Feed in the run-up to the 2016 U.S. presidential election.

The social media giant and its data scientists didn’t create flat-out false stories. But the algorithms powering the News Feed weren’t designed to filter “false” from “true”; they were intended to promote content personalized to a user’s individual taste. While the company doesn’t disclose much about its algorithms (again, they’re proprietary), it has acknowledged that the calculus involves identifying stories that other users of similar tastes are reading and sharing. The result: Thanks to an endless series of what were essentially popularity contests, millions of people’s personal News Feeds were populated with fake news primarily because their peers liked it.

While Facebook offers an example of how individual choices can interact toxically with A.I., researchers worry more about how deep learning could read, and misread, collective data. Timnit Gebru, a postdoctoral researcher who has studied the ethics of algorithms at Microsoft and elsewhere, says she’s concerned about how deep learning might affect the insurance market—a place where the interaction of A.I. and data could put minority groups at a disadvantage. Imagine, for example, a data set about auto accident claims. The data shows that accidents are more likely to take place in inner cities, where densely packed populations create more opportunities for fender benders. Inner cities also tend to have disproportionately high numbers of minorities among their residents.

A deep-learning program, sifting through data in which these correlations were embedded, could “learn” that there was a relationship between belonging to a minority and having car accidents, and could build that lesson into its assumptions about all drivers of color. In essence, that insurance A.I. would develop a racial bias. And that bias could get stronger if, for example, the system were to be further “trained” by reviewing photos and video from accidents in inner-city neighborhoods. In theory, the A.I. would become more likely to conclude that a minority driver is at fault in a crash involving multiple drivers. And it’s more likely to recommend charging a minority driver higher premiums, regardless of her record.

It should be noted that insurers say they do not discriminate or assign rates based on race. But the inner-city hypothetical shows how data that seems neutral (facts about where car accidents happen) can be absorbed and interpreted by an A.I. system in ways that create new disadvantages (algorithms that charge higher prices to minorities, regardless of where they live, based on their race).
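
The mechanics of that hypothetical can be made concrete with a toy simulation. The sketch below is an invented example built only on the article's stated assumptions (accidents are more frequent in the inner city, and inner-city residents are disproportionately members of minority groups); the specific rates, the pricing rule, and the use of Python are illustrative and do not describe any real insurer's model.

```python
# Toy simulation of the inner-city thought experiment: race is never an input, yet a
# pricing rule based only on location still charges minority drivers more on average.
# Every rate and constant below is invented purely for illustration.
import random

random.seed(42)

def simulate_driver():
    inner_city = random.random() < 0.5
    minority = random.random() < (0.60 if inner_city else 0.20)   # assumed demographic skew
    accident = random.random() < (0.30 if inner_city else 0.10)   # density drives accidents
    return {"inner_city": inner_city, "minority": minority, "accident": accident}

drivers = [simulate_driver() for _ in range(100_000)]

def accident_rate(rows):
    return sum(d["accident"] for d in rows) / len(rows)

inner_rate = accident_rate([d for d in drivers if d["inner_city"]])
outer_rate = accident_rate([d for d in drivers if not d["inner_city"]])
overall_rate = accident_rate(drivers)

def premium(driver, base=1000):
    # The "model": price in proportion to the accident rate of the driver's location.
    location_rate = inner_rate if driver["inner_city"] else outer_rate
    return base * location_rate / overall_rate

def average_premium(rows):
    return sum(premium(d) for d in rows) / len(rows)

minority_drivers = [d for d in drivers if d["minority"]]
other_drivers = [d for d in drivers if not d["minority"]]
print("average premium, minority drivers:    ", round(average_premium(minority_drivers)))
print("average premium, non-minority drivers:", round(average_premium(other_drivers)))
```

Because location stands in for race in the synthetic data, the location-only pricing rule ends up charging minority drivers more on average, which is the new kind of disadvantage the preceding paragraph describes.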

What’s more, Gebru notes, given the layers upon layers of data that go into a deep-learning system’s decision-making, A.I.-enabled software could make decisions like this without engineers realizing how or why. “These are things we haven’t even thought about, because we are just starting to uncover biases in the most rudimentary algorithms,” she says.

What distinguishes modern A.I.-powered software from earlier generations is that today’s systems “have the ability to make legally significant decisions on their own,” says Matt Scherer, a labor and employment lawyer at Littler Mendelson who specializes in A.I. The idea of not having a human in the loop to make the call about key outcomes alarmed Scherer when he started studying the field. If flawed data leads a deep-learning-powered X-ray to miss an overweight man’s tumor, is anyone responsible? “Is anyone looking at the legal implications of these things?” Scherer asks himself.

AS BIG TECH PREPARES to embed deep-learning technology in commercial software for customers, questions like this are moving from the academic “what if?” realm to the front burner. In 2016, the year of the Tay misadventure, Microsoft created an internal group called Aether, which stands for AI and Ethics in Engineering and Research, chaired by Eric Horvitz. It’s a cross-disciplinary group, drawing representatives from engineering, research, policy, and legal teams, and machine-learning bias is one of its top areas of discussion. “Does Microsoft have a viewpoint on whether, for example, face-recognition software should be applied in sensitive areas like criminal justice and policing?” Horvitz muses, describing some of the topics the group is discussing. “Is the A.I. technology good enough to be used in this area, or will the failure rates be high enough where there has to be a sensitive, deep consideration for the costs of the failures?”

Joaquin Quiñonero Candela leads Facebook’s Applied Machine Learning group, which is responsible for creating the company’s A.I. technologies. Among many other functions, Facebook uses A.I. to weed spam out of people’s News Feeds. It also uses the technology to help serve stories and posts tailored to their interests—putting Candela’s team adjacent to the fake-news crisis. Candela calls A.I. “an accelerator of history,” in that the technology is “allowing us to build amazing tools that augment our ability to make decisions.” But as he acknowledges, “It is in decision-making that a lot of ethical questions come into play.”

Facebook’s struggles with its News Feed show how difficult it can be to address ethical questions once an A.I. system is already powering a product. Microsoft was able to tweak a relatively simple system like Tay by adding profanities or racial epithets to a blacklist of terms that its algorithm should ignore. But such an approach wouldn’t work when trying to separate “false” from “true”—there are too many judgment calls involved. Facebook’s efforts to bring in human moderators to vet news stories—by, say, excluding articles from sources that frequently published verifiable falsehoods—exposed the company to charges of censorship. Today, one of Facebook’s proposed remedies is to simply show less news in the News Feed and instead highlight baby pictures and graduation photos—a winning-by-retreating approach.

Therein lies the heart of the challenge: The dilemma for tech companies isn’t so much a matter of tweaking an algorithm or hiring humans to babysit it; rather, it’s about human nature itself. The real issue isn’t technical or even managerial—it’s philosophical. Deirdre Mulligan, the Berkeley ethics professor, notes that it’s difficult for computer scientists to codify fairness into software, given that fairness can mean different things to different people. Mulligan also points out that society’s conception of fairness can change over time. And when it comes to one widely shared ideal of fairness—namely, that everybody in a society ought to be represented in that society’s decisions—historical data is particularly likely to be flawed and incomplete.

One of the Microsoft Aether group’s thought experiments illustrates the conundrum. It involves A.I. tech that sifts through a big corpus of job applicants to pick out the perfect candidate for a top executive position. Programmers could instruct the A.I. software to scan the characteristics of a company’s best performers. Depending on the company’s history, it might well turn out that all of the best performers—and certainly all the highest ranking executives—were white males. This might overlook the possibility that the company had a history of promoting only white men (for generations, most companies did), or has a culture in which minorities or women feel unwelcome and leave before they rise.

Anyone who knows anything about corporate history would recognize these flaws—but most algorithms wouldn’t. If A.I. were to automate job recommendations, Horvitz says, there’s always a chance that it can “amplify biases in society that we may not be proud of.”

FEI-FEI LI, the chief scientist for A.I. for Google’s cloud-computing unit, says that bias in technology “is as old as human civilization”—and can be found in a lowly pair of scissors. “For centuries, scissors were designed by right-handed people, used by mostly right-handed people,” she explains. “It took someone to recognize that bias and recognize the need to create scissors for left-handed people.” Only about 10% of the world’s people are left-handed—and it’s human nature for members of the dominant majority to be oblivious to the experiences of other groups.

That same dynamic, it turns out, is present in some of A.I.’s other most notable recent blunders. Consider the A.I.-powered beauty contest that Russian scientists conducted in 2016. Thousands of people worldwide submitted selfies for a contest in which computers would judge their beauty based on factors like the symmetry of their faces.

But of the 44 winners the machines chose, only one had dark skin. An international ruckus ensued, and the contest’s operators later attributed the apparent bigotry of the computers to the fact that the data sets they used to train them did not contain many photos of people of color. The computers essentially ignored photos of people with dark skin and deemed those with lighter skin more “beautiful” because they represented the majority.

This bias-through-omission turns out to be particularly pervasive in deep-learning systems in which image recognition is a major part of the training process. Joy Buolamwini, a researcher at the MIT Media Lab, recently collaborated with Gebru, the Microsoft researcher, on a paper studying gender-recognition technologies from Microsoft, IBM, and China’s Megvii. They found that the tech consistently made more accurate identifications of subjects with photos of lighter-skinned men than with those of darker-skinned women.
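
The paper's central technique, disaggregated evaluation, amounts to reporting accuracy per demographic subgroup rather than as a single aggregate number. The sketch below shows that bookkeeping on a few invented placeholder records; it is not the paper's actual data or code.

```python
# Minimal sketch of a disaggregated accuracy audit: report accuracy per subgroup so
# that gaps between groups become visible. The records below are invented placeholders.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (subgroup, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / total[group] for group in total}

sample = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "male", "female"),   # the kind of error the audit surfaced
]
for group, accuracy in accuracy_by_group(sample).items():
    print(f"{group}: {accuracy:.0%} accurate")
```

Reported this way, a gap between subgroups cannot hide behind a healthy overall average.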

Such algorithmic gaps may seem trivial in an online beauty contest, but Gebru points out that such technology can be used in much more high-stakes situations. “Imagine a self-driving car that doesn’t recognize when it ‘sees’ black people,” Gebru says. “That could have dire consequences.”

The Gebru-Buolamwini paper is making waves. Both Microsoft and IBM have said they have taken actions to improve their image-recognition technologies in response to the audit. While those two companies declined to be specific about the steps they were taking, other companies that are tackling the problem offer a glimpse of what tech can do to mitigate bias.

When Amazon started deploying algorithms to weed out rotten fruit, it needed to work around a sampling-bias problem. Visual-recognition algorithms are typically trained to figure out what, say, strawberries are “supposed” to look like by studying a huge database of images. But pictures of rotten berries, as you might expect, are relatively rare compared with glamour shots of the good stuff. And unlike humans, whose brains tend to notice and react strongly to “outliers,” machine-learning algorithms tend to discount or ignore them.

To adjust, explains Ralf Herbrich, Amazon’s director of artificial intelligence, the online retail giant is testing a computer science technique called oversampling. Machine-learning engineers can direct how the algorithm learns by assigning heavier statistical “weights” to underrepresented data, in this case the pictures of the rotting fruit. The result is that the algorithm ends up being trained to pay more attention to spoiled food than that food’s prevalence in the data library might suggest.
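
As a rough sketch of the technique Herbrich describes, the code below balances an imbalanced toy data set in two interchangeable ways: by repeating examples of the rare class (oversampling) and by attaching heavier per-sample weights during training. The synthetic data, the 950/50 split, and the choice of scikit-learn's logistic regression are assumptions for illustration, not Amazon's actual pipeline or models.

```python
# Illustrative sketch of oversampling/weighting for an imbalanced data set: 950 "good"
# examples vs. 50 "rotten" ones, with two toy features standing in for image features.
# The data, the counts, and the use of scikit-learn are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

X_good = rng.normal(loc=0.0, scale=1.0, size=(950, 2))   # the plentiful "good fruit" class
X_rotten = rng.normal(loc=1.5, scale=1.0, size=(50, 2))  # the rare "rotten fruit" class
X = np.vstack([X_good, X_rotten])
y = np.array([0] * 950 + [1] * 50)                       # 1 = rotten

# Option A: oversample the rare class by repeating its rows until the classes balance.
rotten_idx = np.where(y == 1)[0]
repeats = rng.choice(rotten_idx, size=900, replace=True)
X_balanced = np.vstack([X, X[repeats]])
y_balanced = np.concatenate([y, y[repeats]])
model_oversampled = LogisticRegression().fit(X_balanced, y_balanced)

# Option B: keep the data as-is but give rare-class rows a heavier per-sample weight.
weights = np.where(y == 1, 950 / 50, 1.0)                # inverse-frequency weighting
model_weighted = LogisticRegression().fit(X, y, sample_weight=weights)

probe = [[1.5, 1.5]]                                     # a point near the rare cluster
print("P(rotten), oversampled model:", model_oversampled.predict_proba(probe)[0, 1])
print("P(rotten), weighted model:   ", model_weighted.predict_proba(probe)[0, 1])
```

Either route pushes the model to take the rare class more seriously than its raw frequency would suggest; the same weighting idea carries over to the human-centered examples Herbrich describes next.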

Herbrich points out that oversampling can be applied to algorithms that study humans too (though he declined to cite specific examples of how Amazon does so). “Age, gender, race, nationality—they are all dimensions that you specifically have to test the sampling biases for in order to inform the algorithm over time,” Herbrich says. To make sure that an algorithm used to recognize faces in photos didn’t discriminate against or ignore people of color, or older people, or overweight people, you could add weight to photos of such individuals to make up for the shortage in your data set.

Other engineers are focusing further “upstream”—making sure that the underlying data used to train algorithms is inclusive and free of bias, before it’s even deployed. In image recognition, for example, the millions of images used to train deep-learning systems need to be examined and labeled before they are fed to computers. Radha Basu, the CEO of data-training startup iMerit, whose clients include Getty Images and eBay, explains that the company’s staff of over 1,400 worldwide is trained to label photos on behalf of its customers in ways that can mitigate bias.

Basu declined to discuss how that might play out when labeling people, but she offered other analogies. iMerit staff in India may consider a curry dish to be “mild,” while the company’s staff in New Orleans may describe the same meal as “spicy.” iMerit would make sure both terms appear in the label for a photo of that dish, because to label it as only one or the other would be to build an inaccuracy into the data. Assembling a data set about weddings, iMerit would include traditional Western white-dress-and-layer-cake images—but also shots from elaborate, more colorful weddings in India or Africa.
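
One hypothetical way to keep both of those judgments in the data, rather than collapsing them into a single tag, is to store every annotator's label alongside its origin. The record layout below is invented for illustration and is not iMerit's actual schema.

```python
# Hypothetical record layout for multi-annotator labeling; field names are invented
# and do not reflect iMerit's actual schema.
from collections import Counter

label_record = {
    "image_id": "curry_dish_0091",
    "annotations": [
        {"annotator_region": "India", "spice_level": "mild"},
        {"annotator_region": "New Orleans", "spice_level": "spicy"},
    ],
}

# Downstream training code can see the disagreement instead of a single, lossy label.
votes = Counter(a["spice_level"] for a in label_record["annotations"])
print(dict(votes))   # {'mild': 1, 'spicy': 1}
```

Keeping both tags preserves the regional disagreement that a single label would erase.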

iMerit’s staff stands out in a different way, Basu notes: It includes people with Ph.D.s, but also less-educated people who struggled with poverty, and 53% of the staff are women. The mix ensures that as many viewpoints as possible are involved in the data labeling process. “Good ethics does not just involve privacy and security,” Basu says. “It’s about bias, it’s about, Are we missing a viewpoint?” Tracking down that viewpoint is becoming part of more tech companies’ strategic agendas. Google, for example, announced in June that it would open an A.I. research center later this year in Accra, Ghana. “A.I. has great potential to positively impact the world, and more so if the world is well represented in the development of new A.I. technologies,” two Google engineers wrote in a blog post.

A.I. insiders also believe they can fight bias by making their workforces in the U.S. more diverse—always a hurdle for Big Tech. Fei-Fei Li, the Google executive, recently cofounded the nonprofit AI4ALL to promote A.I. technologies and education among girls and women and in minority communities. The group’s activities include a summer program in which campers visit top university A.I. departments to develop relationships with mentors and role models. The bottom line, says AI4ALL executive director Tess Posner: “You are going to mitigate risks of bias if you have more diversity.”

YEARS BEFORE this more diverse generation of A.I. researchers reaches the job market, however, big tech companies will have further imbued their products with deep-learning capabilities. And even as top researchers increasingly recognize the technology’s flaws—and acknowledge that they can’t predict how those flaws will play out—they argue that the potential benefits, social and financial, justify moving forward.

“I think there’s a natural optimism about what technology can do,” says Candela, the Facebook executive. Almost any digital tech can be abused, he says, but adds, “I wouldn’t want to go back to the technology state we had in the 1950s and say, ‘No, let’s not deploy these things because they can be used wrong.’ ”

Horvitz, the Microsoft research chief, says he’s confident that groups like his Aether team will help companies solve potential bias problems before they cause trouble in public. “I don’t think anybody’s rushing to ship things that aren’t ready to be used,” he says. If anything, he adds, he’s more concerned about “the ethical implications of not doing something.” He invokes the possibility that A.I. could reduce preventable medical error in hospitals. “You’re telling me you’d be worried that my system [showed] a little bit of bias once in a while?” Horvitz asks. “What are the ethics of not doing X when you could’ve solved a problem with X and saved many, many lives?”

The watchdogs’ response boils down to: Show us your work. More transparency and openness about the data that goes into A.I.’s black-box systems will help researchers spot bias faster and solve problems more quickly. When an opaque algorithm could determine whether a person can get insurance, or whether that person goes to prison, says Buolamwini, the MIT researcher, “it’s really important that we are testing these systems rigorously, that there are some levels of transparency.”

Indeed, it’s a sign of progress that few people still buy the idea that A.I. will be infallible. In the web’s early days, notes Tim Hwang, a former Google public policy executive for A.I. who now directs the Harvard-MIT Ethics and Governance of Artificial Intelligence initiative, technology companies could say they are “just a platform that represents the data.” Today, “society is no longer willing to accept that.”

This article originally appeared in the July 1, 2018 issue of Fortune.
