I recently overheard my 2-year-old daughter talking to Amazon’s voice assistant Alexa, and two things struck me. First, she doesn’t distinguish the disembodied voice from that of a regular human. Second, she barks orders at Alexa in a way that would be considered rude by any social convention.

I was suddenly aware and troubled that Alexa is setting a terrible example for my daughter—that women are subservient, should accept rudeness, and belong in the home.

All four of the major in-home artificial intelligence, or AI, assistants—Alexa, Apple’s Siri, Google Assistant, and Microsoft’s Cortana—speak by default with a female voice. Until a recent backlash, they also had docile, obedient personalities that would tolerate an exorbitant amount of sexism. The experience with my daughter opened my eyes to this subtle AI sexism, and I’m afraid it will soon get even worse.

The world’s largest tech platforms have this year launched new services that will quickly make texting between consumers and brands the norm. To handle the millions of messages efficiently, these companies will have to launch their own AI assistants. As a result, the number of assistant bots will quickly expand into the thousands, communicating with billions of consumers across websites, apps, and social networks.

As this “conversational AI” dramatically grows in usage, its sexism could get baked into the world around us, including that of our kids. Subtle reinforcement through repetition can add up, over time, to a form of problematic psychological conditioning. Today, this is quietly creeping up on us because the use of bots is still relatively low—a few minutes per day, perhaps. But soon AI will be much more ubiquitous, as bots start to replace websites and apps completely.

If we don’t change course, this next generation of conversational AI will be created by the same people who built the current sexist algorithms and scripts—but on an exponentially bigger scale. The engineers whose AI systems categorized women into kitchen and secretarial roles while offering men jobs with executive titles will have their biases massively amplified, as conversational AI goes global.

The common thread is men. The AI of today was developed by predominantly white male engineers in too much of a hurry to challenge their own chauvinism or consider the harm their work could do. As a tech company CEO since 1995, I’ve seen this pattern before, during the web, search, and social revolutions of the past 20-plus years. The AI revolution started only recently, but it’s already marginalized half of the world’s population. Shame on us.

Or, I should say, shame on us again. The technology industry is a serial offender. Of the 20 largest U.S. technology firms by revenue, 18 have male CEOs. Only one in five engineers at Facebook, Google, and Microsoft are women. In AI specifically, 83% of attendees at the 2017 main annual gathering of AI experts, the Neural Information Processing Systems (NIPS) conference, were men, as were 90% of NIPS paper authors that year. (Wide-scale statistics on gender diversity in AI, which is relatively new as a specific sector, are not yet available.)

How can we build lasting and far-reaching AI technology if women are missing from the equation? We’re just getting started, but the signs are already worrying. Left unchecked, the results could be catastrophic.
To avert a disaster in conversational AI, one important antidote to techie male bias that we are pursuing aggressively in our company is to engage contact center staff alongside coders in building the bots. Customer service representatives—who are 65% female in the U.S., per the Labor Department—are a more diverse group than the programmers who write code, and their share of women is far higher than the average among engineers at the big tech companies.

Companies working in AI should recruit more balanced workforces, partner with female leaders to reduce male bias, and host women-led tech initiatives. We need to develop a set of best practices in bot building and spread them across the industry.

AI has huge potential, but until the field begins to hold itself accountable, we’ll continue to miss the perspective and inclusivity we need for true progress. Diversity is our best defense against replicating and amplifying hidden biases. Without it, AI will soon birth the next crisis in the technology industry.

Robert LoCascio is the founder and CEO of LivePerson.