1. NLU基础

  NLU是Natural Language Understanding的简称,即自然语言理解。一直以来都与NLG(Generation)任务并称为NLP两大主流任务。一般意义上的NLU常指与理解给定句子意思相关的意图识别、实体抽取、指代关系等任务,在智能对话中应用比较广泛。

  具体点来说,当用户输入一句话时,机器人一般会针对该句话(也可以把历史记录给附加上)进行全方面分析,包括:

  • 情感倾向分析:简单来说,一般会包括正向、中性、负向三种类型,也可以设计更多的类别。或更复杂的细粒度情感分析,比如针对其中某个实体或属性的情感,而不是整个句子的。

  • 意图识别:一般都是分类模型,大部分时候都是多分类,但是也有可能是层次分类,或多标签模型。

    • 多分类:给定输入文本,输出为一个Label,但Label的总数有多个。比如类型包括询问地址、询问时间、询问价格、闲聊等等。

    • 层次分类:给定输入文本,输出为层次的Label,也就是从根节点到最终细粒度类别的路径。比如询问地址/询问家庭地址、询问地址/询问公司地址等等。

    • 多标签分类:给定输入文本,输出不定数量的Label,也就是说每个文本可能有多个Label,Label之间是平级关系。

  • 实体和关系抽取:

    • 实体抽取:提取出给定文本中的实体。实体一般指具有特定意义的实词,如人名、地名、作品、品牌等等;很多时候也是业务直接相关的词。

    • 关系抽取:实体之间往往有一定的关系,比如「刘亦菲」出演「天龙八部」,其中「刘亦菲」就是人名、「天龙八部」是作品名,其中的关系就是「出演」,一般会和实体作为三元组来表示。

  一般经过以上这些分析后,机器人就可以对用户的输入有一个比较清晰的理解,便于接下来据此做出响应。

  另外值得一提的是,上面的过程并不一定只用在对话中,只要涉及到用户输入Query需要给出响应的场景,都需要这个NLU的过程,一般也叫Query解析。

  上面提到的几个分析,如果从算法的角度看,其实就两种:

  • 句子级别的分类:如情感分析、意图识别、关系抽取等。也就是给一个句子,给出一个或多个Label。

  • Token级别的分类:如实体抽取、阅读理解(就是给一段文本和一个问题,然后在文本中找到问题的答案)。也就是给一个句子,给出对应实体的位置。

  Token级别的分类不太好理解,我们举个例子,比如下面这句话:

刘亦菲出演天龙八部。

  它在标注的时候是这样的:

刘/B-PER
亦/I-PER
菲/I-PER
出/O
演/O
天/B-WORK
龙/I-WORK
八/I-WORK
部/I-WORK
。/O

  在上面这个例子中,每个Token就是每个字,每个Token会对应一个Label(当然也可以多个),Label中的B表示Begin,I表示Internal,O表示Other(也就是非实体)。模型要做的就是学习这种对应关系,当给出新的文本时,能够给出每个Token的Label预测。

  可以看到也就是说,它们本质上都是分类任务,只是分类的位置或标准不一样。当然了,实际应用中会有各种不同的变化和设计,但整个思路是差不多的,我们并不需要掌握这些知识,只需要知道大概是怎么一回事就好了。

1.1 句子级别的分类

  接下来,我们简单介绍一下这些分类具体是怎么做的,先说句子级别的分类。回忆上一章的Embedding,那可以算是整个DeepLearning NLP的基石,我们这部分的内容也会用到Embedding。具体过程如下:

  • 将给定句子或文本表征成Embedding

  • 将Embedding传入一个神经网络,计算得到不同Label的概率分布

  • 将上一步的Label概率分布与真实的分布做比较,并将误差回传,修改神经网络的参数

  • 得到训练好的神经网络

In [72]:

import numpy as np
np.random.seed(0)

  我们举个例子,简单起见,假设Embedding维度为32维(上一章OpenAI返回的维度比较大):

In [73]:

emd = np.random.normal(0, 1, (1, 32))

  如果我们是三分类,那么最简单的W就是32×3的大小(这个W就被称为模型/参数):

In [74]:

W = np.random.random((32, 3))

In [75]:

z = emd @ W
z

Out[75]:

array([[6.93930177, 5.96232449, 3.96168115]])

  得到这样的结果,因为我们想要的是概率分布,所以需要对其归一化(也就是变成0-1之间的概率值,并且加起来为1)。

In [117]:

def norm(z):
    exp = np.exp(z)
    return exp / np.sum(exp)

In [77]:

y = norm(z)
y

Out[77]:

array([[0.70059356, 0.26373654, 0.0356699 ]])

In [78]:

np.sum(y)

Out[78]:

0.9999999999999999

  根据给出的y,知道这个结果告诉我们预测的Label是0,如果真实的Label是1,那0位置的这个概率分布下次就会变小,1位置的就会变大。

  实际中,W往往更加复杂,可以包含任意的数组,只要最后输出变成1×3的大小即可,比如我们弄个复杂点的:

In [79]:

w1 = np.random.random((32, 100))
w2 = np.random.random((100, 32))
w3 = np.random.random((32, 3))

In [81]:

y = norm(norm(norm(emd @ w1) @ w2) @ w3)
y

Out[81]:

array([[0.32940147, 0.34281657, 0.32778196]])

  只是参数(模型)更复杂了些,其他都是一样的。

  稍微复杂点的是多标签分类和层次分类,这俩因为输出的都是多个标签,处理起来要麻烦一些,不过它们的处理方式是类似的。我们以多标签分类来说明,假设有10个标签,给定输入文本,可能是其中任意多个标签。这就意味着我们需要将10个标签的概率分布都表示出来。可以针对每个标签做个二分类,也就是说输出的大小是10×2的,每一行表示「是否是该标签」的概率分布。

In [168]:

def norm(z):
    axis = -1
    exp = np.exp(z)
    return exp / np.expand_dims(np.sum(exp, axis=axis), axis)

In [173]:

np.random.seed(42)

emd = np.random.normal(0, 1, (1, 32))
W = np.random.random((10, 32, 2))
y = norm(emd @ W)
y.shape

Out[173]:

(10, 1, 2)

In [174]:

y

Out[174]:

array([[[0.66293305, 0.33706695]],

       [[0.76852603, 0.23147397]],

       [[0.59404023, 0.40595977]],

       [[0.04682992, 0.95317008]],

       [[0.84782999, 0.15217001]],

       [[0.01194495, 0.98805505]],

       [[0.96779413, 0.03220587]],

       [[0.04782398, 0.95217602]],

       [[0.41894957, 0.58105043]],

       [[0.43668264, 0.56331736]]])

  这里的输出每一行有两个值,分别表示标签「是/0」和「是/1」的概率,比如第一行,0的概率为0.66,1的概率为0.34。需要注意的是归一化时,我们要指定维度,否则就变成所有的值加起来为1了,这就不对了。

  上面是句子级别分类(Sequence Classification)的逻辑,我们必须再次说明,实际比上面要复杂得多,但基本思路是这样的。我们在LLM时代也并不需要自己去构建模型了,本章后面会讲到如何使用LLM的API进行各类任务。

1.2 Token级别的分类

  接下来看Token级别的分类,有了刚刚的基础,这个看起来就比较容易了。它最大的特点是,Embedding是针对每个Token的。也就是说,如果给定文本长度为10,假定维度依然是32,那Embedding的大小就为:(1, 10, 32)。比刚刚的(1, 32)多了个10。换句话说,这个文本的每一个Token都是一个32维的向量。再通俗一点来说,对于模型来说,无论你是1个Token、2个Token还是100个Token,都可以统一看待--都是固定维度的一个向量表示。

  下面我们假设Label共5个:B-PER,I-PER,B-WORK,I-WORK,O。

In [175]:

emd = np.random.normal(0, 1, (1, 10, 32))

In [176]:

W = np.random.random((32, 5))

In [177]:

z = emd @ W
y = norm(z)

In [178]:

y.shape

Out[178]:

(1, 10, 5)

In [179]:

y

Out[179]:

array([[[0.23850186, 0.04651826, 0.12495322, 0.28764271, 0.30238396],
        [0.06401011, 0.3422055 , 0.54911626, 0.01179874, 0.03286939],
        [0.18309536, 0.62132479, 0.09037235, 0.06016401, 0.04504349],
        [0.01570559, 0.0271437 , 0.20159052, 0.12386611, 0.63169408],
        [0.1308541 , 0.06810165, 0.61293236, 0.00692553, 0.18118637],
        [0.08011671, 0.04648297, 0.00200392, 0.02913598, 0.84226041],
        [0.05143706, 0.09635837, 0.00115594, 0.83118412, 0.01986451],
        [0.03721064, 0.14529403, 0.03049475, 0.76177941, 0.02522117],
        [0.24154874, 0.28648044, 0.11024747, 0.35380566, 0.0079177 ],
        [0.10965428, 0.00432547, 0.08823724, 0.00407713, 0.79370588]]])

  注意看,每一行表示一个Token是某个Label的概率分布(也就是每一行加起来为1),比如第一行:

In [182]:

sum((0.23850186, 0.04651826, 0.12495322, 0.28764271, 0.30238396))

Out[182]:

1.00000001

  表示啥呢?表示第一个Token的Label是O,那真实的Label和这个预测的之间就可能有误差,通过误差就可以更新参数,从而使得之后预测时能预测到正确的Label(也就是正确位置的概率最大)。

  好了,关于NLU常见问题的基本原理我们就介绍到这里了,如果你对其中的很多细节感兴趣,那么可以关注DataWhale的NLP相关教程,从小项目开始一步一步构建自己的知识体系。

2. 相关API

2.1 LMAS GPT API

  这里我们介绍openai的GPT接口,利用GPT大模型的In-Context能力进行Zero-Shot或Few-Shot的推理。这里有四个概念需要先稍微解释一下:

  • GPT:全称是Generative Pretrained Transformer,生成式预训练Transformer。大家只要知道它是一个大模型的名字即可。

  • In-Context:简单来说就是一种上下文能力,也就是模型只要根据输入的文本就可以自动给出对应的结果,这种能力是大模型在学习了非常多的文本后获得的。可以看作是一种内在的理解能力。

  • Zero-Shot:直接给模型文本,让它给出你要的标签或输出。

  • Few-Shot:给模型一些类似的Case(输入+输出),再拼上一个新的没有输出的输入,让模型给出输出。

  如果对In-Context更多细节感兴趣的,可以阅读【相关文献1】。

  接下来,我们就可以用同一个接口,只要通过构造不同的输入就可以完成不同的任务。换句话说,通过使用GPT大模型的In-Context能力,我们只需要输入的时候告诉模型我们的任务就行。

  我们看看具体的用法:

In [9]:

from openai import OpenAI

client = OpenAI()
OPENAI_API_KEY = "填入专属的API key"
api_key = OPENAI_API_KEY

In [10]:

def complete(prompt):
    response = client.Completion.create(
      model="gpt-3.5-turbo-instruct",
      prompt=prompt,
      temperature=0,
      max_tokens=64,
      top_p=1.0,
      frequency_penalty=0.0,
      presence_penalty=0.0
    )
    ans = response.choices[0].text
    return ans

  这个是Completion接口,可以理解为续写,这个续写可不止能帮助我们完成一段话或一篇文章,而且可以用来做各种各样的任务,比如咱们这章要讲的分类和实体提取任务。

  相比上一章的Embedding接口,它的接口参数要复杂多了,重要的参数包括:

  • model:指定的模型,text-davinci-003就是其中一个模型,大家可以根据自己的需要,参考官方链接进行选择,一般需要综合价格和效果进行权衡。

  • prompt:提示,默认为<|endoftext|>,它是模型在训练期间看到的文档分隔符,因此如果未指定Prompt,模型将像从新文档的开始一样。简单来说,就是给模型的提示语,咱们下面有例子。

  • max_tokens:生成的最大Token数,默认为16。注意这里的Token数不一定是字数(但对中文来说几乎一致)。Prompt+生成的文本,所有的Token长度不能超过模型的上下文长度(一般是2048,新的是4096,具体可以参考上面的链接)。

  • temperature:温度,默认为1。采样温度,介于0和2之间。较高的值(如0.8)将使输出更加随机,而较低的值(如0.2)将使其更加集中和确定。通常建议调整这个参数或下面的top_p,但不能同时更改两者。

  • top_p:采样topN分布,默认为1。0.1意味着Next Token只选择前10%概率的。

  • stop:停止的Token或序列,默认为null,最多4个,如果遇到该Token或序列就停止继续生成。注意生成的结果中不包含stop。

  • presence_penalty:存在惩罚,默认为0,介于-2.0和2.0之间的数字。正值会根据新Token到目前为止是否出现在文本中来惩罚它们,从而增加模型讨论新主题的可能性。

  • frequency_penalty:频率惩罚,默认为0,介于-2.0和2.0之间的数字。正值会根据新Token到目前为止在文本中的现有频率来惩罚新Token,降低模型重复生成同一行的可能性。

  更多可以参考:API Reference - OpenAI API。在大部分情况下,我们只需考虑上面这几个参数即可,甚至大部分时候只需要前两个参数,其他的用默认也行。不过熟悉上面的参数将帮助你更好地使用API。

  先来个最简单的情感分类的例子,我们分别展示一下Zero-Shot和Few-Shot。

In [200]:

# Zero-Shot
# 来自openai官方文档
prompt="""The following is a list of companies and the categories they fall into:

Apple, Facebook, Fedex

Apple
Category:
"""

In [201]:

ans = complete(prompt)
print(ans)
Technology 

Facebook
Category:
Social Media 

Fedex
Category:
Logistics and Delivery

In [202]:

# Few-Shot
prompt = """今天真开心。-->正向
心情不太好。-->负向
我们是快乐的年轻人。-->
"""

In [203]:

ans = complete(prompt)
print(ans)
正向

  再来个实体识别的例子。

In [204]:

# Zero-Shot 来自openai官方文档
prompt = """
From the text below, extract the following entities in the following format:
Companies: <comma-separated list of companies mentioned>
People & titles: <comma-separated list of people mentioned (with their titles or roles appended in parentheses)>

Text:
In March 1981, United States v. AT&T came to trial under Assistant Attorney General William Baxter. AT&T chairman Charles L. Brown thought the company would be gutted. He realized that AT&T would lose and, in December 1981, resumed negotiations with the Justice Department. Reaching an agreement less than a month later, Brown agreed to divestiture—the best and only realistic alternative. AT&T's decision allowed it to retain its research and manufacturing arms. The decree, titled the Modification of Final Judgment, was an adjustment of the Consent Decree of 14 January 1956. Judge Harold H. Greene was given the authority over the modified decree....

In 1982, the U.S. government announced that AT&T would cease to exist as a monopolistic entity. On 1 January 1984, it was split into seven smaller regional companies, Bell South, Bell Atlantic, NYNEX, American Information Technologies, Southwestern Bell, US West, and Pacific Telesis, to handle regional phone services in the U.S. AT&T retains control of its long distance services, but was no longer protected from competition.
"""

In [205]:

ans = complete(prompt)
print(ans)
Companies: AT&T, Bell South, Bell Atlantic, NYNEX, American Information Technologies, Southwestern Bell, US West, Pacific Telesis
People & titles: William Baxter (Assistant Attorney General), Charles L. Brown (AT&T chairman), Harold H. Greene (Judge)

  上面是官方给的一个Zero-Shot的例子,我们来造一个Few-Shot的例子,实体给设置的稍微特殊一些。

In [35]:

# Few-Shot
prompt = """
根据下面的格式抽取给定Text中的实体:
和弦: <实体用逗号分割>


Text:
增三和弦是大三度+大三度的增五度音,减三和弦是小三度+小三度的减五度音。
和弦:增三和弦,减三和弦

Text:
三和弦是由3个按照三度音程关系排列起来的一组音。大三和弦是大三度+小三度的纯五度音,小三和弦是小三度+大三度的纯五度音。
"""

In [36]:

ans = complete(prompt)
print(ans)
和弦:大三和弦,小三和弦

  Nice,不是么。大家可以尝试如果不给这个例子,它会输出什么。

2.2 ChatGPT Style

  这个是ChatCompletions接口,可以理解为对话(也就是ChatGPT),几乎可以做任意的NLP任务。它的参数和Completion类似,我们依然介绍主要参数:

  • model:指定的模型,gpt-3.5-turbo就是ChatGPT,大家还是可以根据实际情况参考官方给出的列表选择合适的模型。

  • messages:会话消息,支持多轮,多轮就是多条。每一条消息为一个字典,包含「role」和「content」两个字段。如:[{"role": "user", "content": "Hello!"}]

  • temperature:和Completion接口含义一样。

  • top_p:和Completion接口含义一样。

  • stop:和Completion接口含义一样。

  • max_tokens:默认无上限,其他和Completion接口含义一样,也受限于模型的最大上下文长度。

  • presence_penalty:和Completion接口含义一样。

  • frequency_penalty:和Completion接口含义一样。

  更多可以参考:API Reference - OpenAI API,值得再次一提的是,接口支持多轮,而且多轮非常简单,只需要把历史会话加进去就可以了。

  接下来,我们采用ChatGPT方式来做类似的任务。这个输入看起来和上面是比较类似的。

In [38]:

def ask(content):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo", 
        messages=[{"role": "user", "content": content}]
    )

    ans = response.choices[0].message.content
    return ans

  我们依次尝试上面的例子:

In [212]:

prompt="""The following is a list of companies and the categories they fall into:

Apple, Facebook, Fedex

Apple
Category:
"""

In [214]:

ans = ask(prompt)
print(ans)
Technology/Electronics

Facebook 
Category:
Technology/Social Media

Fedex
Category:
Logistics/Shipping

  嗯,效果也是类似的,不过在ChatGPT这里我们可以更加精简一些:

In [215]:

prompt="""please output the category of the following companies:
Apple, Facebook, Fedex

The output format should be:
<company>
Category:
<category>
"""

In [216]:

ans = ask(prompt)
print(ans)
Apple
Category:
Technology

Facebook
Category:
Technology/Social Media

Fedex
Category:
Delivery/Logistics

  Great! ChatGPT比前面的GPT API更加「聪明」,交互更加自然。我们可以尝试以对话的方式来让它帮忙完成任务。

  实际上,关于这个领域早就有了一个成熟的技术方案:Prompt工程,大家可以进一步阅读【相关文献2】。这里给出一些常见的建议:

  • 清晰,切忌复杂或歧义,如果有术语,应定义清楚。

  • 具体,描述语言应尽量具体,不要抽象活模棱两可。

  • 聚焦,问题避免太泛或开放。

  • 简洁,避免不必要的描述。

  • 相关,主要指主题相关,而且是整个对话期间,不要东一瓢西一瓤。

  新手容易忽略的地方:

  • 没有说明具体的输出目标。

  • 在一次对话中混合多个主题。

  • 让语言模型做数学题。

  • 没有给出想要什么的示例样本。

  • 反向提示。也就是一些反面例子。

  • 要求他一次只做一件事。可以将步骤捆绑在一起一次说清,不要拆的太碎。

  我们来试一下情感分类的例子:

In [217]:

prompt = """请给出下面句子的情感倾向,情感倾向包括三种:正向、中性、负向。
句子:我们是快乐的年轻人。
"""

In [218]:

ans = ask(prompt)
print(ans)
情感倾向:正向

  再来做一下实体的例子:

In [221]:

prompt = """
请抽取给定Text中的实体,实体包括Company和People&Title,对于People,请同时给出它们的Title或role,跟在实体后面,用括号括起。

Text:
In March 1981, United States v. AT&T came to trial under Assistant Attorney General William Baxter. AT&T chairman Charles L. Brown thought the company would be gutted. He realized that AT&T would lose and, in December 1981, resumed negotiations with the Justice Department. Reaching an agreement less than a month later, Brown agreed to divestiture—the best and only realistic alternative. AT&T's decision allowed it to retain its research and manufacturing arms. The decree, titled the Modification of Final Judgment, was an adjustment of the Consent Decree of 14 January 1956. Judge Harold H. Greene was given the authority over the modified decree....

In 1982, the U.S. government announced that AT&T would cease to exist as a monopolistic entity. On 1 January 1984, it was split into seven smaller regional companies, Bell South, Bell Atlantic, NYNEX, American Information Technologies, Southwestern Bell, US West, and Pacific Telesis, to handle regional phone services in the U.S. AT&T retains control of its long distance services, but was no longer protected from competition.
"""

In [222]:

ans = ask(prompt)
print(ans)
实体抽取结果:
- Company: AT&T, Bell South, Bell Atlantic, NYNEX, American Information Technologies, Southwestern Bell, US West, Pacific Telesis
- People&Title: William Baxter (Assistant Attorney General), Charles L. Brown (AT&T chairman), Judge Harold H. Greene.

  看起来还行。我们最后试一下上面的另一个例子:

In [59]:

prompt = """
根据下面的格式抽取给定Text中的和弦实体,实体必须包括“和弦”两个字。


Desired format:
和弦:<用逗号隔开>

Text:
三和弦是由3个按照三度音程关系排列起来的一组音。大三和弦是大三度+小三度的纯五度音,小三和弦是小三度+大三度的纯五度音。
"""

In [60]:

ans = ask(prompt)
print(ans)
和弦:大三和弦,小三和弦

  我们还用了中英文混合,它也完全没问题。

  大家不妨多多尝试,也可以参考【相关文献 2-10】中的写法,总的来说,它并没有什么标准答案,最多也是一种习惯或约定。大家可以自由尝试,不用有任何负担。

3. NLU应用

3.1 文档问答

  文档问答和上一章的QA有点类似,不过要稍微复杂一点。它会先用QA的方法召回一个相关的文档,然后让模型在这个文档中找出问题的答案。一般的流程还是先召回相关文档,然后做阅读理解任务。阅读理解和实体提取任务有些类似,但它预测的不是具体某个标签,而是答案的Index,即start和end的位置。

  还是举个例子。假设我们的问题是:“北京奥运会举办于哪一年?”

  召回的文档可能是含有北京奥运会举办的新闻,比如类似下面这样的:

第29届夏季奥林匹克运动会(Beijing 2008; Games of the XXIX Olympiad),又称2008年北京奥运会,2008年8月8日晚上8时整在中国首都北京开幕。8月24日闭幕。

  标注就是「2008年」这个答案的索引。

  当然,一个文档里可能有不止一个问题,比如上面的文档,还可以问:“北京奥运会啥时候开幕?”,“北京奥运会什么时候闭幕”,“北京奥运会是第几届奥运会”等问题。

  根据之前的NLP方法,这里实际做起来方案会比较多,也有一定的复杂度;不过总的来说还是分类任务。现在我们有了LLM,问题就变得简单了。依然是两步:

  • 召回:与上一章的QA类似,这次召回的是Doc,这一步其实就是相似Embedding选择最相似的。

  • 回答:将召回来的文档和问题以Prompt的方式提交给Completion/ChatCompletion接口,直接得到答案。

  我们分别用两种不同的接口各举一例,首先看看Completion接口:

In [30]:

from openai import OpenAI
client = OpenAI()

OPENAI_API_KEY = "填入专属的API key"
api_key = OPENAI_API_KEY

In [9]:

def complete(prompt):
    response = client.completions.create(
        prompt=prompt,
        temperature=0,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        model="gpt-3.5-turbo-instruct"
    )
    ans = response.choices[0].text.strip(" \n")
    return ans

In [10]:

# 来自官方文档
prompt = """Answer the question as truthfully as possible using the provided text, and if the answer is not contained within the text below, say "I don't know"

Context:
The men's high jump event at the 2020 Summer Olympics took place between 30 July and 1 August 2021 at the Olympic Stadium.
33 athletes from 24 nations competed; the total possible number depended on how many nations would use universality places 
to enter athletes in addition to the 32 qualifying through mark or ranking (no universality places were used in 2021).
Italian athlete Gianmarco Tamberi along with Qatari athlete Mutaz Essa Barshim emerged as joint winners of the event following
a tie between both of them as they cleared 2.37m. Both Tamberi and Barshim agreed to share the gold medal in a rare instance
where the athletes of different nations had agreed to share the same medal in the history of Olympics. 
Barshim in particular was heard to ask a competition official "Can we have two golds?" in response to being offered a 
'jump off'. Maksim Nedasekau of Belarus took bronze. The medals were the first ever in the men's high jump for Italy and 
Belarus, the first gold in the men's high jump for Italy and Qatar, and the third consecutive medal in the men's high jump
for Qatar (all by Barshim). Barshim became only the second man to earn three medals in high jump, joining Patrik Sjöberg
of Sweden (1984 to 1992).

Q: Who won the 2020 Summer Olympics men's high jump?
A:"""

In [11]:

complete(prompt)

Out[11]:

'Gianmarco Tamberi and Mutaz Essa Barshim emerged as joint winners of the event.'

  上面的Context就是我们召回的文档。

  再看ChatCompletion接口:

In [16]:

prompt = """请根据以下Context回答问题,直接输出答案即可,不用附带任何上下文。

Context:
诺曼人(诺曼人:Nourmands;法语:Normands;拉丁语:Normanni)是在10世纪和11世纪将名字命名为法国诺曼底的人。他们是北欧人的后裔(丹麦人,挪威人和挪威人)的海盗和海盗,他们在首相罗洛(Rollo)的领导下向西弗朗西亚国王查理三世宣誓效忠。经过几代人的同化,并与法兰克和罗马高卢人本地居民融合,他们的后代将逐渐与以西卡罗来纳州为基础的加洛林人文化融合。诺曼人独特的文化和种族身份最初出现于10世纪上半叶,并在随后的几个世纪中持续发展。

问题:
诺曼底在哪个国家/地区?
"""

In [17]:

def ask(content):
    response = client.completions.create(
        model="gpt-3.5-turbo", 
        messages=[{"role": "user", "content": content}]
    )

    ans = response.choices[0].message.content
    return ans

In [18]:

ans = ask(prompt)
print(ans)
法国。

  看起来还行,我们接下来就把整个流程串起来,先用Completion接口实现(便宜),不过也很方便替换过去,毕竟输入都不变(都是Prompt)。

  首先是加载数据集,取自:openai-cookbook/olympics-1-collect-data.ipynb at 1f6c2304b401e931928e74e978d9a0b8a40d1cf7 · openai/openai-cookbook

In [74]:

import pandas as pd
df = pd.read_csv("./dataset/olympics_sections_text.csv")
df.shape

Out[74]:

(3964, 4)

In [75]:

df.head()

Out[75]:

title

heading

content

tokens

0

2020 Summer Olympics

Summary

The 2020 Summer Olympics (Japanese: 2020年夏季オリン...

726

1

2020 Summer Olympics

Host city selection

The International Olympic Committee (IOC) vote...

126

2

2020 Summer Olympics

Impact of the COVID-19 pandemic

In January 2020, concerns were raised about th...

374

3

2020 Summer Olympics

Qualifying event cancellation and postponement

Concerns about the pandemic began to affect qu...

298

4

2020 Summer Olympics

Effect on doping tests

Mandatory doping tests were being severely res...

163

  我们这次不用Redis,换一个工具:Qdrant - Vector Search Engine,Qdrant相比Redis的单线程更容易扩展。但我们切记,要根据实际情况选择工具,很多时候过度优化是原罪,适合的就是最好的。我们真正需要做的是将业务逻辑抽象,做到尽量不依赖任何工具,换工具只需要换一个适配器就好。

  依然使用Docker,启动很简单:

docker run -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant`

  自然也少不了客户端的安装:

pip install qdrant-client

  不过首先还是生成Embedding,这一步可以使用get_embedding接口:

In [80]:

openai_client = OpenAI()
def get_embedding(query,model = "text-embedding-ada-002"):
    res = openai_client.embeddings.create(model=model,input = query)
    return res

  或者也可以直接使用原生的Embedding接口,还支持多条一次请求:

In [92]:

def get_embedding_direct(inputs):
    embed_model = "text-embedding-ada-002"

    res = openai_client.embeddings.create(
        input=inputs, model=embed_model
    )
    return res

In [110]:

texts = [v.content for v in df.itertuples()]
len(texts)

Out[110]:

3964

In [111]:

import pnlp

In [123]:

emds = []
for idx, batch in enumerate(pnlp.generate_batches_by_size(texts, 200)):
    response = get_embedding_direct(batch)
    for v in response.data:
        emds.append(v.embedding)
    print(f"batch: {idx} done")
batch: 0 done
batch: 1 done
batch: 2 done
batch: 3 done
batch: 4 done
batch: 5 done
batch: 6 done
batch: 7 done
batch: 8 done
batch: 9 done
batch: 10 done
batch: 11 done
batch: 12 done
batch: 13 done
batch: 14 done
batch: 15 done
batch: 16 done
batch: 17 done
batch: 18 done
batch: 19 done

In [124]:

len(emds), len(emds[0])

Out[124]:

(3964, 1536)

  接下来是创建索引:

In [149]:

from qdrant_client import QdrantClient
client = QdrantClient(host="localhost", port=6333)

  值得注意的是,qdrant还支持内存/文件库,也就是说,可以直接:

In [150]:

# client = QdrantClient(":memory:")
# 或
# client = QdrantClient(path="path/to/db")

  我们还是用server的方式:

In [151]:

from qdrant_client.models import Distance, VectorParams

client.recreate_collection(
    collection_name="doc_qa",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)

Out[151]:

True

In [136]:

# client.delete_collection("doc_qa")

  然后是把向量入库:

In [152]:

payload=[
    {"content": v.content, "heading": v.heading, "title": v.title, "tokens": v.tokens} for v in df.itertuples()
]

In [153]:

client.upload_collection(
    collection_name="doc_qa",
    vectors=emds,
    payload=payload
)

  接下来进行查询:

In [154]:

query = "Who won the 2020 Summer Olympics men's high jump?"
query_vector = get_embedding(query).data[0].embedding
hits = client.search(
    collection_name="doc_qa",
    query_vector=query_vector,
    limit=5
)

In [155]:

hits

Out[155]:

[ScoredPoint(id=236, version=3, score=0.90316474, payload={'content': 'The men\'s high jump event at the 2020 Summer Olympics took place between 30 July and 1 August 2021 at the Olympic Stadium. 33 athletes from 24 nations competed; the total possible number depended on how many nations would use universality places to enter athletes in addition to the 32 qualifying through mark or ranking (no universality places were used in 2021). Italian athlete Gianmarco Tamberi along with Qatari athlete Mutaz Essa Barshim emerged as joint winners of the event following a tie between both of them as they cleared 2.37m. Both Tamberi and Barshim agreed to share the gold medal in a rare instance where the athletes of different nations had agreed to share the same medal in the history of Olympics. Barshim in particular was heard to ask a competition official "Can we have two golds?" in response to being offered a \'jump off\'. Maksim Nedasekau of Belarus took bronze. The medals were the first ever in the men\'s high jump for Italy and Belarus, the first gold in the men\'s high jump for Italy and Qatar, and the third consecutive medal in the men\'s high jump for Qatar (all by Barshim). Barshim became only the second man to earn three medals in high jump, joining Patrik Sjöberg of Sweden (1984 to 1992).', 'heading': 'Summary', 'title': "Athletics at the 2020 Summer Olympics – Men's high jump", 'tokens': 275}, vector=None),
 ScoredPoint(id=313, version=4, score=0.88258004, payload={'content': "The men's long jump event at the 2020 Summer Olympics took place between 31 July and 2 August 2021 at the Japan National Stadium. Approximately 35 athletes were expected to compete; the exact number was dependent on how many nations use universality places to enter athletes in addition to the 32 qualifying through time or ranking (1 universality place was used in 2016). 31 athletes from 20 nations competed. Miltiadis Tentoglou won the gold medal, Greece's first medal in the men's long jump. Cuban athletes Juan Miguel Echevarría and Maykel Massó earned silver and bronze, respectively, the nation's first medals in the event since 2008.", 'heading': 'Summary', 'title': "Athletics at the 2020 Summer Olympics – Men's long jump", 'tokens': 136}, vector=None),
 ScoredPoint(id=284, version=4, score=0.8821836, payload={'content': "The men's pole vault event at the 2020 Summer Olympics took place between 31 July and 3 August 2021 at the Japan National Stadium. 29 athletes from 18 nations competed. Armand Duplantis of Sweden won gold, with Christopher Nilsen of the United States earning silver and Thiago Braz of Brazil taking bronze. It was Sweden's first victory in the event and first medal of any color in the men's pole vault since 1952. Braz, who had won in 2016, became the ninth man to earn multiple medals in the pole vault.", 'heading': 'Summary', 'title': "Athletics at the 2020 Summer Olympics – Men's pole vault", 'tokens': 112}, vector=None),
 ScoredPoint(id=222, version=3, score=0.876395, payload={'content': "The men's triple jump event at the 2020 Summer Olympics took place between 3 and 5 August 2021 at the Japan National Stadium. Approximately 35 athletes were expected to compete; the exact number was dependent on how many nations use universality places to enter athletes in addition to the 32 qualifying through time or ranking (2 universality places were used in 2016). 32 athletes from 19 nations competed. Pedro Pichardo of Portugal won the gold medal, the nation's second victory in the men's triple jump (after Nelson Évora in 2008). China's Zhu Yaming took silver, while Hugues Fabrice Zango earned Burkina Faso's first Olympic medal in any event.", 'heading': 'Summary', 'title': "Athletics at the 2020 Summer Olympics – Men's triple jump", 'tokens': 139}, vector=None),
 ScoredPoint(id=205, version=3, score=0.86075026, payload={'content': "The men's 110 metres hurdles event at the 2020 Summer Olympics took place between 3 and 5 August 2021 at the Olympic Stadium. Approximately forty athletes were expected to compete; the exact number was dependent on how many nations used universality places to enter athletes in addition to the 40 qualifying through time or ranking (1 universality place was used in 2016). 40 athletes from 29 nations competed. Hansle Parchment of Jamaica won the gold medal, the nation's second consecutive victory in the event. His countryman Ronald Levy took bronze. American Grant Holloway earned silver, placing the United States back on the podium in the event after the nation missed the medals for the first time in Rio 2016 (excluding the boycotted 1980 Games).", 'heading': 'Summary', 'title': "Athletics at the 2020 Summer Olympics – Men's 110 metres hurdles", 'tokens': 149}, vector=None)]

  接下来将这个过程包装在Prompt生成过程中:

In [166]:

MAX_SECTION_LEN = 500
SEPARATOR = "\n* "
separator_len = 3

In [167]:

def construct_prompt(question: str):
    query_vector = get_embedding(question).data[0].embedding
    hits = client.search(
        collection_name="doc_qa",
        query_vector=query_vector,
        limit=5
    )
    
    choose = []
    length = 0
    indexes = []
     
    for hit in hits:
        doc = hit.payload
        length += doc["tokens"] + separator_len
        if length > MAX_SECTION_LEN:
            break
            
        choose.append(SEPARATOR + doc["content"].replace("\n", " "))
        indexes.append(doc["title"] + doc["heading"])
            
    # Useful diagnostic information
    print(f"Selected {len(choose)} document sections:")
    print("\n".join(indexes))
    
    header = """Answer the question as truthfully as possible using the provided context, and if the answer is not contained within the text below, say "I don't know."\n\nContext:\n"""
    
    return header + "".join(choose) + "\n\n Q: " + question + "\n A:"

In [175]:

prompt = construct_prompt("Who won the 2020 Summer Olympics men's high jump?")

print("===\n", prompt)
Selected 2 document sections:
Athletics at the 2020 Summer Olympics – Men's high jumpSummary
Athletics at the 2020 Summer Olympics – Men's long jumpSummary
===
 Answer the question as truthfully as possible using the provided context, and if the answer is not contained within the text below, say "I don't know."

Context:

* The men's high jump event at the 2020 Summer Olympics took place between 30 July and 1 August 2021 at the Olympic Stadium. 33 athletes from 24 nations competed; the total possible number depended on how many nations would use universality places to enter athletes in addition to the 32 qualifying through mark or ranking (no universality places were used in 2021). Italian athlete Gianmarco Tamberi along with Qatari athlete Mutaz Essa Barshim emerged as joint winners of the event following a tie between both of them as they cleared 2.37m. Both Tamberi and Barshim agreed to share the gold medal in a rare instance where the athletes of different nations had agreed to share the same medal in the history of Olympics. Barshim in particular was heard to ask a competition official "Can we have two golds?" in response to being offered a 'jump off'. Maksim Nedasekau of Belarus took bronze. The medals were the first ever in the men's high jump for Italy and Belarus, the first gold in the men's high jump for Italy and Qatar, and the third consecutive medal in the men's high jump for Qatar (all by Barshim). Barshim became only the second man to earn three medals in high jump, joining Patrik Sjöberg of Sweden (1984 to 1992).
* The men's long jump event at the 2020 Summer Olympics took place between 31 July and 2 August 2021 at the Japan National Stadium. Approximately 35 athletes were expected to compete; the exact number was dependent on how many nations use universality places to enter athletes in addition to the 32 qualifying through time or ranking (1 universality place was used in 2016). 31 athletes from 20 nations competed. Miltiadis Tentoglou won the gold medal, Greece's first medal in the men's long jump. Cuban athletes Juan Miguel Echevarría and Maykel Massó earned silver and bronze, respectively, the nation's first medals in the event since 2008.

 Q: Who won the 2020 Summer Olympics men's high jump?
 A:

In [170]:

def complete(prompt):
    response = openai_client.completions.create(
        prompt=prompt,
        temperature=0,
        max_tokens=300,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        model="gpt-3.5-turbo-instruct"
    )
    ans = response.choices[0].text.strip(" \n")
    return ans

In [171]:

complete(prompt)

Out[171]:

'Gianmarco Tamberi and Mutaz Essa Barshim emerged as joint winners of the event following a tie between both of them as they cleared 2.37m. Both Tamberi and Barshim agreed to share the gold medal.'

  试试ChatCompletion(ChatGPT)接口:

In [174]:

def ask(content):
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo", 
        messages=[{"role": "user", "content": content}]
    )

    ans = response.choices[0].message.content
    return ans

In [176]:

ans = ask(prompt)

In [177]:

ans

Out[177]:

"Gianmarco Tamberi and Mutaz Essa Barshim shared the gold medal in the men's high jump event at the 2020 Summer Olympics."

  再看几个例子:

In [181]:

query = "Why was the 2020 Summer Olympics originally postponed?"
prompt = construct_prompt(query)
answer = complete(prompt)

print(f"\nQ: {query}\nA: {answer}")
Selected 1 document sections:
Concerns and controversies at the 2020 Summer OlympicsSummary

Q: Why was the 2020 Summer Olympics originally postponed?
A: The 2020 Summer Olympics were originally postponed due to the COVID-19 pandemic.

In [182]:

query = "In the 2020 Summer Olympics, how many gold medals did the country which won the most medals win?"
prompt = construct_prompt(query)
answer = complete(prompt)

print(f"\nQ: {query}\nA: {answer}")
Selected 2 document sections:
2020 Summer Olympics medal tableSummary
List of 2020 Summer Olympics medal winnersSummary

Q: In the 2020 Summer Olympics, how many gold medals did the country which won the most medals win?
A: The United States won the most medals overall, with 113, and the most gold medals, with 39.

In [183]:

# ChatGPT
answer = ask(prompt)

print(f"\nQ: {query}\nA: {answer}")
Q: In the 2020 Summer Olympics, how many gold medals did the country which won the most medals win?
A: The country that won the most medals at the 2020 Summer Olympics was the United States, with 113 medals, including 39 gold medals.

In [184]:

query = "What is the tallest mountain in the world?"
prompt = construct_prompt(query)
answer = complete(prompt)

print(f"\nQ: {query}\nA: {answer}")
Selected 3 document sections:
Sport climbing at the 2020 Summer Olympics – Men's combinedRoute-setting
Ski mountaineering at the 2020 Winter Youth Olympics – Boys' individualSummary
Ski mountaineering at the 2020 Winter Youth Olympics – Girls' individualSummary

Q: What is the tallest mountain in the world?
A: I don't know.

In [185]:

# ChatGPT
answer = ask(prompt)

print(f"\nQ: {query}\nA: {answer}")
Q: What is the tallest mountain in the world?
A: I don't know.

  Very Nice, isn't that!

3.2 分类/实体微调

  在前面"相关API"一节,我们已经介绍了各种分类和实体提取的用法。这里会给大家介绍更具体、常见的任务。

  首先是主题分类,简单来说就是给定文本,判断属于哪一类主题。

  我们找一个新闻主题分类的数据集看看,数据集取自:CLUEbenchmark/CLUE,共15个类别。

In [7]:

import pnlp
lines = pnlp.read_file_to_list_dict("./dataset/tnews.json")
len(lines)

Out[7]:

10000

In [8]:

from collections import Counter
ct = Counter([v["label_desc"] for v in lines])

In [9]:

ct.most_common()

Out[9]:

[('news_tech', 1089),
 ('news_finance', 956),
 ('news_entertainment', 910),
 ('news_world', 905),
 ('news_car', 791),
 ('news_sports', 767),
 ('news_culture', 736),
 ('news_military', 716),
 ('news_travel', 693),
 ('news_game', 659),
 ('news_edu', 646),
 ('news_agriculture', 494),
 ('news_house', 378),
 ('news_story', 215),
 ('news_stock', 45)]

  stock这个类别太少了,我们先给它去掉:

In [6]:

lines = [v for v in lines if v["label_desc"] != "news_stock"]
len(lines)

Out[6]:

9955

In [10]:

def get_prompt(text):
    prompt = f"""对给定文本进行分类,类别包括:科技、金融、娱乐、世界、汽车、运动、文化、军事、旅游、游戏、教育、农业、房产、社会、股票。

给定文本:
{text}
类别:
"""
    return prompt

In [11]:

lines[59]

Out[11]:

{'label': '101',
 'label_desc': 'news_culture',
 'sentence': '上联:银笛吹开云天月,下联怎么对?',
 'keywords': ''}

In [12]:

prompt = get_prompt(lines[59]["sentence"])
print(prompt)
对给定文本进行分类,类别包括:科技、金融、娱乐、世界、汽车、运动、文化、军事、旅游、游戏、教育、农业、房产、社会、股票。

给定文本:
上联:银笛吹开云天月,下联怎么对?
类别:

In [17]:

from openai import OpenAI
import os
openai_client = OpenAI()

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")

api_key = OPENAI_API_KEY

In [18]:

def complete(prompt):
    response = openai_client.completions.create(
        prompt=prompt,
        temperature=0,
        max_tokens=10,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        model="text-davinci-003"
    )
    ans = response.choices[0].text.strip(" \n")
    return ans

In [19]:

complete(prompt)

Out[19]:

'文化'

In [20]:

def ask(content):
    response = openai_client.ChatCompletion.create(
        model="gpt-3.5-turbo", 
        messages=[{"role": "user", "content": content}],
        temperature=0,
        max_tokens=10,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
    )

    ans = response.choices[0].message.content
    return ans

In [21]:

ask(prompt)

Out[21]:

'文化'

  再试几个其他类别的:

In [113]:

lines[1]

Out[113]:

{'label': '110',
 'label_desc': 'news_military',
 'sentence': '以色列大规模空袭开始!伊朗多个军事目标遭遇打击,誓言对等反击',
 'keywords': '伊朗,圣城军,叙利亚,以色列国防军,以色列'}

In [114]:

prompt = get_prompt(lines[1]["sentence"])
print(prompt)
对给定文本进行分类,类别包括:科技、金融、娱乐、世界、汽车、文化、军事、旅游、游戏、教育、农业、房产、社会、股票。

给定文本:
以色列大规模空袭开始!伊朗多个军事目标遭遇打击,誓言对等反击
类别:

In [210]:

ask(prompt)

Out[210]:

'军事'

In [211]:

complete(prompt)

Out[211]:

'军事'

In [115]:

lines[2]

Out[115]:

{'label': '104',
 'label_desc': 'news_finance',
 'sentence': '出栏一头猪亏损300元,究竟谁能笑到最后!',
 'keywords': '商品猪,养猪,猪价,仔猪,饲料'}

In [220]:

prompt = get_prompt(lines[2]["sentence"])
print(prompt)
对给定文本进行分类,类别包括:科技、金融、娱乐、世界、汽车、文化、军事、旅游、游戏、教育、农业、房产、社会、股票。

给定文本:
出栏一头猪亏损300元,究竟谁能笑到最后!
类别:

In [221]:

ask(prompt)

Out[221]:

'农业'

In [222]:

complete(prompt)

Out[222]:

'社会'

  这个有点迷糊,不过「农业」这个类别感觉也没问题。当然,遇到有错误的情况,我们起码还有两种手段来解决:

  • Few-Shot,可以每次随机从数据集里抽几条出来作为Prompt的一部分。

  • Fine-Tuning,把我们自己的数据集按指定格式准备好,提交给API,让它帮我们微调一个属于我们自己的模型,它在我们自己的数据集上学习过。

  Few-Shot最关键的是如何找到这个「Few」,换句话说,我们拿什么Case给模型当做参考样本。对于类别比较多的多分类(实际工作中,成百上千中Label是很常见的),Few-Shot即使每个Label一个例子,这上下文长度也不得了。不太现实。这时候其实Few-Shot有点不太方便了。当然,如果我们非要用也不是不行,还是最常用的策略:先召回几个相似句,然后把相似句的内容和类别作为Few-Shot的例子,让接口来预测给定句子的类别。

  不过,我们还可以使用微调(Fine-Tuning)方法在自己的数据集上对模型进行微调,简单来说就是让模型「熟悉」我们独特的数据,进而让其具备在类似数据集上识别出类别的能力。

  接下来,就让我们看看具体怎么做,一般包括三个主要步骤:

  • 准备数据:按接口要求的格式把数据准备好,这里的数据就是我们自己的数据集,至少包含一段文本和一个类别。

  • 微调:使用微调接口将刚刚的数据传递过去,由服务器自动完成微调,微调完成后可以得到一个新的model_id。注意,这个model_id只属于你自己,不要将它公开给其他人。

  • 使用新的模型进行推理:嗯,这个很简单,把原来接口里的model参数内容换成我们的model_id即可。

  咱们接下来就来调调这个多分类模型,我们只取后500条作为训练集(为了快速和省钱……)。

In [12]:

train_lines = lines[-500:]

In [13]:

import pandas as pd

In [14]:

train = pd.DataFrame(train_lines)
train.shape

Out[14]:

(500, 4)

In [15]:

train.head()

Out[15]:

label

label_desc

sentence

keywords

0

103

news_sports

为什么斯凯奇与阿迪达斯脚感很相似,价格却差了近一倍?

达斯勒,阿迪达斯,FOAM,BOOST,斯凯奇

1

100

news_story

女儿日渐消瘦,父母发现有怪物,每天吃女儿一遍

大将军,怪物

2

104

news_finance

另类逼空确认反弹,剑指3200点以上

股票,另类逼空,金融,创业板,快速放大

3

100

news_story

老公在聚会上让我向他的上司敬酒,现在老公哭了,我笑了

远走高飞

4

108

news_edu

女孩上初中之后成绩下降,如何才能提升成绩?

In [16]:

train.label_desc.value_counts()

Out[16]:

news_finance          48
news_tech             47
news_game             46
news_entertainment    46
news_travel           44
news_sports           42
news_military         40
news_world            38
news_car              36
news_culture          35
news_edu              27
news_agriculture      20
news_house            19
news_story            12
Name: label_desc, dtype: int64

股票数据稍微少了些,问题不大。

Step1:准备数据

  要保证有两列为:prompt和completion。

In [17]:

df_train = train[["sentence", "label_desc"]]
df_train.columns = ["prompt", "completion"]
df_train.head()

Out[17]:

prompt

completion

0

为什么斯凯奇与阿迪达斯脚感很相似,价格却差了近一倍?

news_sports

1

女儿日渐消瘦,父母发现有怪物,每天吃女儿一遍

news_story

2

另类逼空确认反弹,剑指3200点以上

news_finance

3

老公在聚会上让我向他的上司敬酒,现在老公哭了,我笑了

news_story

4

女孩上初中之后成绩下降,如何才能提升成绩?

news_edu

  存起来:

In [18]:

df_train.to_json("dataset/tnews-finetuning.jsonl", orient='records', lines=True)

  使用openai命令行工具进行转换:

In [19]:

!openai tools fine_tunes.prepare_data -f dataset/tnews-finetuning.jsonl -q
Analyzing...

- Your file contains 500 prompt-completion pairs
- Based on your data it seems like you're trying to fine-tune a model for classification
- For classification, we recommend you try one of the faster and cheaper models, such as `ada`
- For classification, you can estimate the expected model performance by keeping a held out dataset, which is not used for training
- More than a third of your `prompt` column/key is uppercase. Uppercase prompts tends to perform worse than a mixture of case encountered in normal language. We recommend to lower case the data if that makes sense in your domain. See https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset for more details
- Your data does not contain a common separator at the end of your prompts. Having a separator string appended to the end of the prompt makes it clearer to the fine-tuned model where the completion should begin. See https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset for more detail and examples. If you intend to do open-ended generation, then you should leave the prompts empty
- All completions start with prefix `news_`. Most of the time you should only add the output data into the completion, without any prefix
- The completion should start with a whitespace character (` `). This tends to produce better results due to the tokenization we use. See https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset for more details

Based on the analysis we will perform the following actions:
- [Recommended] Lowercase all your data in column/key `prompt` [Y/n]: Y
- [Recommended] Add a suffix separator ` ->` to all prompts [Y/n]: Y
- [Recommended] Remove prefix `news_` from all completions [Y/n]: Y
- [Recommended] Add a whitespace character to the beginning of the completion [Y/n]: Y
- [Recommended] Would you like to split into training and validation set? [Y/n]: Y


Your data will be written to a new JSONL file. Proceed [Y/n]: Y

Wrote modified files to `dataset/tnews-finetuning_prepared_train.jsonl` and `dataset/tnews-finetuning_prepared_valid.jsonl`
Feel free to take a look!

Now use that file when fine-tuning:
> openai api fine_tunes.create -t "dataset/tnews-finetuning_prepared_train.jsonl" -v "dataset/tnews-finetuning_prepared_valid.jsonl" --compute_classification_metrics --classification_n_classes 14

After you’ve fine-tuned a model, remember that your prompt has to end with the indicator string ` ->` for the model to start generating completions, rather than continuing with the prompt.
Once your model starts training, it'll approximately take 14.33 minutes to train a `curie` model, and less for `ada` and `babbage`. Queue will approximately take half an hour per job ahead of you.

  看一下处理成什么样子了:

In [20]:

!head dataset/tnews-finetuning_prepared_train.jsonl
{"prompt":"cf生存特训:火箭弹狂野复仇,为兄弟报仇就要不死不休 ->","completion":" game"}
{"prompt":"哈尔滨 东北抗日联军博物馆 ->","completion":" culture"}
{"prompt":"中国股市中,庄家为何如此猖獗?一文告诉你真相 ->","completion":" finance"}
{"prompt":"天府锦绣又重来 ->","completion":" agriculture"}
{"prompt":"生活,游戏,电影中有哪些词汇稍加修改便可以成为一个非常霸气的名字? ->","completion":" game"}
{"prompt":"法庭上,生父要争夺孩子抚养权,小男孩的发言让生父当场哑口无言 ->","completion":" entertainment"}
{"prompt":"如何才能选到好的深圳大数据培训机构? ->","completion":" edu"}
{"prompt":"有哪些娱乐圈里面的明星追星? ->","completion":" entertainment"}
{"prompt":"东坞原生态野生茶 ->","completion":" culture"}
{"prompt":"亚冠:恒大不胜早有预示,全北失利命中注定 ->","completion":" sports"}

  最好再验证一下每个数据集的类型:

In [21]:

train = pnlp.read_file_to_list_dict("./dataset/tnews-finetuning_prepared_train.jsonl")
valid = pnlp.read_file_to_list_dict("./dataset/tnews-finetuning_prepared_valid.jsonl")

In [28]:

len(Counter([v["completion"] for v in train]))

Out[28]:

14

In [29]:

len(Counter([v["completion"] for v in valid]))

Out[29]:

14

  好的,最明显的是给我们每一个prompt后面加了个标记,除此之外还有(请注意看上面的日志):

  • 小写

  • 去除标签news_前缀

  • 在completion前面加空格

  • 切分为训练和验证集

  这些都是常见的、推荐的预处理,我们就按这样。

Step2:微调

In [8]:

import os

In [9]:

os.environ.setdefault("OPENAI_API_KEY", "填入专属的API key")

Out[9]:

'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'

In [32]:

!openai api fine_tunes.create \
    -t "./dataset/tnews-finetuning_prepared_train.jsonl" \
    -v "./dataset/tnews-finetuning_prepared_valid.jsonl" \
    --compute_classification_metrics --classification_n_classes 14 \
    -m davinci\
    --no_check_if_files_exist
Upload progress: 100%|████████████████████| 41.2k/41.2k [00:00<00:00, 15.6Mit/s]
Uploaded file from ./dataset/tnews-finetuning_prepared_train.jsonl: file-DKjBKHqWFJwo7O8MNZGOcj3F
Upload progress: 100%|████████████████████| 10.5k/10.5k [00:00<00:00, 7.85Mit/s]
Uploaded file from ./dataset/tnews-finetuning_prepared_valid.jsonl: file-j088k3GWqGeqY0o2DDAfWfPh
Created fine-tune: ft-QOkrWkHU0aleR6f5IQw1UpVL
Streaming events until fine-tuning is complete...

(Ctrl-C will interrupt the stream, but not cancel the fine-tune)
[2023-04-04 21:16:33] Created fine-tune: ft-QOkrWkHU0aleR6f5IQw1UpVL

Stream interrupted (client disconnected).
To resume the stream, run:

  openai api fine_tunes.follow -i ft-QOkrWkHU0aleR6f5IQw1UpVL

  其中,-t-v分别指定训练集和验证集,接下来那行用来计算指标,-m指定要微调的模型。最后一行是检查文件是否存在,如果之前传过文件的话,这里可以复用。为了便于演示,我们这里不检查。可以微调的模型和价格参见:Pricing

  另外,值得一提的是:只能微调Completion接口,ChatCompletion不支持微调。也就是说InstructGPT的几个模型是可以微调的,但是ChatGPT不能微调,可参阅:https://platform.openai.com/docs/guides/chat/is-fine-tuning-available-for-gpt-3-5-turbo

  我们这里选davinci,也是咱们之前一直用的。

  可以看到,上面跑一下就断掉了,这个是正常的,我们可以通过另一个API去查看任务的进度。注意,这里的ID是上面日志打出来的ID,每次执行都会变。

In [33]:

!openai api fine_tunes.get -i ft-QOkrWkHU0aleR6f5IQw1UpVL
{
  "created_at": 1680614193,
  "events": [
    {
      "created_at": 1680614193,
      "level": "info",
      "message": "Created fine-tune: ft-QOkrWkHU0aleR6f5IQw1UpVL",
      "object": "fine-tune-event"
    }
  ],
  "fine_tuned_model": null,
  "hyperparams": {
    "batch_size": null,
    "classification_n_classes": 14,
    "compute_classification_metrics": true,
    "learning_rate_multiplier": null,
    "n_epochs": 4,
    "prompt_loss_weight": 0.01
  },
  "id": "ft-QOkrWkHU0aleR6f5IQw1UpVL",
  "model": "davinci",
  "object": "fine-tune",
  "organization_id": "org-bKXddeZffpMS2CUNCCXsW7m5",
  "result_files": [],
  "status": "pending",
  "training_files": [
    {
      "bytes": 41212,
      "created_at": 1680614191,
      "filename": "./dataset/tnews-finetuning_prepared_train.jsonl",
      "id": "file-DKjBKHqWFJwo7O8MNZGOcj3F",
      "object": "file",
      "purpose": "fine-tune",
      "status": "processed",
      "status_details": null
    }
  ],
  "updated_at": 1680614193,
  "validation_files": [
    {
      "bytes": 10507,
      "created_at": 1680614193,
      "filename": "./dataset/tnews-finetuning_prepared_valid.jsonl",
      "id": "file-j088k3GWqGeqY0o2DDAfWfPh",
      "object": "file",
      "purpose": "fine-tune",
      "status": "processed",
      "status_details": null
    }
  ]
}

  或者用它刚刚给的提示。注意换ID!

In [129]:

!openai api fine_tunes.follow -i ft-QOkrWkHU0aleR6f5IQw1UpVL
[2023-04-04 14:32:14] Created fine-tune: ft-5LDv5IiFqPvLob3KkThWLTUG

Stream interrupted (client disconnected).
To resume the stream, run:

  openai api fine_tunes.follow -i ft-5LDv5IiFqPvLob3KkThWLTUG

  注意,这个是follow,刚刚上面那个是get。大家可以通过openai api --help查看更多:

In [54]:

!openai api --help
usage: openai api [-h]
                  {engines.list,engines.get,engines.update,engines.generate,chat_completions.create,completions.create,deployments.list,deployments.get,deployments.delete,deployments.create,models.list,models.get,models.delete,files.create,files.get,files.delete,files.list,fine_tunes.list,fine_tunes.create,fine_tunes.get,fine_tunes.results,fine_tunes.events,fine_tunes.follow,fine_tunes.cancel,fine_tunes.delete,image.create,image.create_edit,image.create_variation,audio.transcribe,audio.translate}
                  ...

positional arguments:
  {engines.list,engines.get,engines.update,engines.generate,chat_completions.create,completions.create,deployments.list,deployments.get,deployments.delete,deployments.create,models.list,models.get,models.delete,files.create,files.get,files.delete,files.list,fine_tunes.list,fine_tunes.create,fine_tunes.get,fine_tunes.results,fine_tunes.events,fine_tunes.follow,fine_tunes.cancel,fine_tunes.delete,image.create,image.create_edit,image.create_variation,audio.transcribe,audio.translate}
                        All API subcommands

optional arguments:
  -h, --help            show this help message and exit

  建议大家过段时间get一下进度就好,不需要一直follow。这里可能要等一段时间,等排队完成后进入训练阶段就很快了。

  主要看status是什么状态,依然注意要换ID。

In [10]:

!openai api fine_tunes.get -i ft-QOkrWkHU0aleR6f5IQw1UpVL
{
  "created_at": 1680614193,
  "events": [
    {
      "created_at": 1680614193,
      "level": "info",
      "message": "Created fine-tune: ft-QOkrWkHU0aleR6f5IQw1UpVL",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680614845,
      "level": "info",
      "message": "Fine-tune costs $2.33",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680614846,
      "level": "info",
      "message": "Fine-tune enqueued",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680617657,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 31",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680617785,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 30",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680617805,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 29",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680617809,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 28",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680617918,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 27",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680617928,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 26",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618038,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 25",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618050,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 24",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618087,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 23",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618096,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 22",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618207,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 21",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618256,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 20",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618268,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 19",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618336,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 18",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618372,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 17",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618445,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 16",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618488,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 15",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618582,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 14",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618612,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 13",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618652,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 12",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618693,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 11",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618717,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 10",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618759,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 9",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618790,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 8",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618841,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 7",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618886,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 6",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618899,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 5",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618928,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 4",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680618979,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 3",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619057,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 2",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619065,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 1",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619127,
      "level": "info",
      "message": "Fine-tune is in the queue. Queue number: 0",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619227,
      "level": "info",
      "message": "Fine-tune started",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619443,
      "level": "info",
      "message": "Completed epoch 1/4",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619573,
      "level": "info",
      "message": "Completed epoch 2/4",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619701,
      "level": "info",
      "message": "Completed epoch 3/4",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619829,
      "level": "info",
      "message": "Completed epoch 4/4",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619890,
      "level": "info",
      "message": "Uploaded model: davinci:ft-personal-2023-04-04-14-51-29",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619891,
      "level": "info",
      "message": "Uploaded result file: file-xvIjsDl5aFEtXWOVY6nZyLcD",
      "object": "fine-tune-event"
    },
    {
      "created_at": 1680619891,
      "level": "info",
      "message": "Fine-tune succeeded",
      "object": "fine-tune-event"
    }
  ],
  "fine_tuned_model": "davinci:ft-personal-2023-04-04-14-51-29",
  "hyperparams": {
    "batch_size": 1,
    "classification_n_classes": 14,
    "compute_classification_metrics": true,
    "learning_rate_multiplier": 0.1,
    "n_epochs": 4,
    "prompt_loss_weight": 0.01
  },
  "id": "ft-QOkrWkHU0aleR6f5IQw1UpVL",
  "model": "davinci",
  "object": "fine-tune",
  "organization_id": "org-bKXddeZffpMS2CUNCCXsW7m5",
  "result_files": [
    {
      "bytes": 82416,
      "created_at": 1680619891,
      "filename": "compiled_results.csv",
      "id": "file-xvIjsDl5aFEtXWOVY6nZyLcD",
      "object": "file",
      "purpose": "fine-tune-results",
      "status": "processed",
      "status_details": null
    }
  ],
  "status": "succeeded",
  "training_files": [
    {
      "bytes": 41212,
      "created_at": 1680614191,
      "filename": "./dataset/tnews-finetuning_prepared_train.jsonl",
      "id": "file-DKjBKHqWFJwo7O8MNZGOcj3F",
      "object": "file",
      "purpose": "fine-tune",
      "status": "processed",
      "status_details": null
    }
  ],
  "updated_at": 1680619892,
  "validation_files": [
    {
      "bytes": 10507,
      "created_at": 1680614193,
      "filename": "./dataset/tnews-finetuning_prepared_valid.jsonl",
      "id": "file-j088k3GWqGeqY0o2DDAfWfPh",
      "object": "file",
      "purpose": "fine-tune",
      "status": "processed",
      "status_details": null
    }
  ]
}

  过了很长一段时间,终于成功了。微调结束后,我们还可以通过下面的命令查看结果:

In [11]:

# -i 就是上面微调的模型id,就是`id`字段
!openai api fine_tunes.results -i ft-QOkrWkHU0aleR6f5IQw1UpVL > metric.csv

In [1]:

import pandas as pd
metric = pd.read_csv('metric.csv')
metric[metric['classification/accuracy'].notnull()].tail(1)

Out[1]:

step

elapsed_tokens

elapsed_examples

training_loss

training_sequence_accuracy

training_token_accuracy

validation_loss

validation_sequence_accuracy

validation_token_accuracy

classification/accuracy

classification/weighted_f1_score

1601

1602

83226

1602

0.008739

1.0

1.0

NaN

NaN

NaN

0.63

0.619592

In [17]:

step_acc = metric[metric['classification/accuracy'].notnull()]['classification/accuracy']#.reset_index()

In [33]:

import matplotlib.pyplot as plt
%matplotlib inline

In [34]:

import matplotlib.pyplot as plt
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10 ,6))
ax.plot(step_acc.index, step_acc.values, "k-", lw=1, alpha=1.0)
ax.set_xlabel("Step")
ax.set_ylabel("Accuracy");

In [25]:

fig.savefig("./images/ChatGPT-CH03-1.png")

  这个Accuracy其实是非常一般的,应该是咱们给的语料太少的缘故。我们先看下怎么使用。

Step3:使用

In [40]:

prompt = get_prompt(lines[2]["sentence"])
print(prompt)
对给定文本进行分类,类别包括:科技、金融、娱乐、世界、汽车、文化、军事、旅游、游戏、教育、农业、房产、社会、股票。

给定文本:
出栏一头猪亏损300元,究竟谁能笑到最后!
类别:

In [41]:

def complete(prompt, model, max_tokens=2):
    response = openai_client.completions.create(
        prompt=prompt,
        temperature=0,
        max_tokens=max_tokens,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        model=model
    )
    ans = response.choices[0].text.strip(" \n")
    return ans

In [43]:

# 原来的,微调之前的
complete(prompt, "text-davinci-003", 5)

Out[43]:

'社会'

  注意咱们的prompt也要改一下。预期应该要返回一个英文单词,回忆一下训练数据里面的completion。我们这里为了方便说明,特意在前面用了中文标签,微调时用英文(就是希望它能直接输出英文标签,表示微调有效)。大家在实际使用时务必要统一。

  我们把模型换成刚刚微调的(就是上面返回结果中的fine_tuned_model字段)。

In [44]:

# 微调后的
prompt = lines[2]["sentence"] + " ->"
complete(prompt, "davinci:ft-personal-2023-04-04-14-51-29", 1)

Out[44]:

'agriculture'

  咦——居然变成农业了,它和之前ChatGPT(ChatCompletion)的输出一样,不过依然不是我们训练数据集里的「金融」。我想这个句子看起来确实更加像农业主题,放在农业主题下应该也没问题。而且,我们的训练数据集里并没有包含这条样本。所以,这个问题不太大。

  如果我们非要它变成金融的,可以把这条数据也丢给微调接口,微调后应该就可以得到我们训练集里给的类别了。

  上面我们介绍了主题分类的微调。实体抽取的微调也是类似的,它推荐的输入格式如下:

In [63]:

{"prompt":"<any text, for example news article>\n\n###\n\n", "completion":" <list of entities, separated by a newline> END"}

Out[63]:

{'prompt': '<any text, for example news article>\n\n###\n\n',
 'completion': ' <list of entities, separated by a newline> END'}

  举个例子:

In [64]:

{"prompt":"Portugal will be removed from the UK's green travel list from Tuesday, amid rising coronavirus cases and concern over a \"Nepal mutation of the so-called Indian variant\". It will join the amber list, meaning holidaymakers should not visit and returnees must isolate for 10 days...\n\n###\n\n", "completion":" Portugal\nUK\nNepal mutation\nIndian variant END"}

Out[64]:

{'prompt': 'Portugal will be removed from the UK\'s green travel list from Tuesday, amid rising coronavirus cases and concern over a "Nepal mutation of the so-called Indian variant". It will join the amber list, meaning holidaymakers should not visit and returnees must isolate for 10 days...\n\n###\n\n',
 'completion': ' Portugal\nUK\nNepal mutation\nIndian variant END'}

  相信大家应该很容易理解,不妨自己做一些尝试。尤其是给一些专业领域的实体进行微调,对比一下微调前后的效果。

  如果大家对这块内容感兴趣,可以进一步阅读【相关文献11和12】。

3.3 智能对话

  智能对话,有时候也叫智能客服、对话机器人、聊天机器人等等。总之就是和用户通过聊天窗口进行交互的一种技术。传统的对话机器人(下面都这么叫了)一般包括三个大的模块:

  • NLU:负责对用户输入进行理解。我们在本章一开始已经提到了,主要就是意图分类+实体识别这两种技术。实际中还可能有实体关系抽取、情感识别等组件。

  • DM:Dialogue Management,对话管理。就是在拿到NLU的结果后,如何确定机器人的回复,也就是对话方向的控制。

  • NLG:Natural Language Generation,自然语言生成。就是生成最终要回复给用户的输出。

  对话机器人一般包括三种,不同类型的技术方案侧重有所不同。常见的类型如下:

  • 任务型机器人

  • 问答型机器人

  • 闲聊型机器人

  任务型机器人主要用来完成特定的任务,比如订机票、订餐等,这一类机器人最关键的是要获取完成任务所需的各种信息(专业术语叫:槽位)。整个对话过程其实可以看作是一个填槽过程,通过与用户不断对话获取到需要的槽位信息。比如订餐这个任务,就餐人数、就餐时间、联系人电话等就是基本信息,机器人就要想办法获取到这些信息。这里NLU就是重头,DM一般使用两种方法:模型控制或流程图控制。前者通过模型自动学习来实现流转,后者则根据意图类型进行流转控制。

  问答型机器人主要用来回复用户问题,和上一章介绍的QA有点类似,平时我们常见的客服机器人往往是这种类型。它们更重要的是Question的匹配,DM相对弱一些。

  闲聊机器人就是和客户瞎扯淡的机器人,没啥实际作用。

  以上是大致的分类,但真实场景中的对话机器人往往是多种功能的结合体。更加适合从主动发起/被动接受这个角度来划分。

  • 主动发起对话的机器人

  • 被动接受对话的机器人

  前者一般是以外呼的方式进行,营销、催款、通知等都是常见的场景。这种对话机器人一般不闲聊,电话费不允许。它们基本都是带着特定任务或目的走流程,流程走完就挂断结束。与用户的互动更多是以QA的形式完成,因为主动权在机器人手里,所以流程一遍都是固定控制的,甚至QA的数量、回答次数也会控制。

  后者一般是以网页或客户端的形式存在,绝大部分时候都是用户找上门来了,比如大部分公司首页都有个「智能客服」,就是类似功能。它们以QA为主,辅以闲聊。稍微复杂点的是上面提到的任务型机器人,需要不断收集槽位信息。

  ChatGPT时代,智能对话机器人会有什么新变化吗?接下来,我们探讨一下这方面内容。

  首先,可以肯定的是ChatGPT极大的扩展了对话机器人的边界,在此之前其实有不少端到端的方案,感兴趣的读者可以略读一下【相关文献13】。之前的方案现在看来是有点复杂、繁琐,效果还不好。所以之前除了闲聊,很少有真正使用端到端方案的对话机器人。不过ChatGPT的强大In-Context能力不仅让使用更加简单(我们只需把历史对话分角色放进去就好了),而且效果也更好,除了闲聊,问答型机器人它也可以很擅长,交互更加humanable。

  我们具体来展开说说它可以做的,我们尽量聚焦当下能做到的,不做过多未来畅想。

  • 作为问答类产品,比如知识问答、情感咨询、心理咨询等等,完全称得上诸事不决ChatGPT。举几个简单例子,比如问它编程概念(如闭包的作用),问它如何追一个心仪的女孩子,问它怎么避免焦虑等等。它的大部分回答绝对能让你眼前一亮。这可是之前只能闲聊的机器人完全够不着的高度。

  • 作为智能客服,通过与企业知识库结合(In-Context方式和微调方式①)完全可以胜任客服工作,而且相比之前的QA类的客服,它回答更加个性化,效果也更好(如果不记得了,回忆一下前面“文档问答”的内容)。

  • 作为智能营销机器人,智能客服更加偏向被动、为用户答疑解惑的方向,营销机器人则更加主动一些,它会根据已存储的用户信息,主动向用户推荐相关产品,根据预设的目标(可以理解为槽位,长期要收集的信息)向用户发起对话。它还可以同时负责维护客户关系。

  • 作为NPC(Non-Player Character)、陪聊机器人等休闲娱乐类产品。

  • 作为教育、培训的导师,可以进行一对一教学,尤其适合语言、编程类学习。

  这些都是它确定可以做的,为什么能做?归根结底还是其大规模参数所学到的知识和具备的理解力。尤其是后者,应该是决定性的(只有知识就是Google搜索引擎)。

  当然,并不是什么都要ChatGPT,我们千万要避免「手里有锤子,到处找钉子」的思维方式。某位哲人说过,一项新技术的出现,短期内总是被高估,长期内总是被低估。ChatGPT绝对是划时代的,但也不意味着你什么都要ChatGPT一下。比如,某些分类和实体抽取任务,之前的方法已经能达到非常好的效果,这时候就完全不需要替换。我们知道很多实际任务它并不会随着技术发展有太多变化,比如分类任务,难道你出来个新技术,分类任务就不是分类任务了吗。技术的更新会让我们的效率得到提升,也就是说做同样的任务更加简单和高效了,可以做更难的任务了,但不等于任务也会发生变化。所以,一定要理清楚这里面的关键。

  不过如果你是新开发,或者不了解这方面的专业知识,那就另当别论了,使用LLM的API反而可能是更好的策略。但即便如此,实际上线前还是应该考虑清楚各种细节,比如服务不可用怎么办,并发大概多少,时延要求多少,用户规模大概多少等等。我们技术方案的选型是和公司或自己的需求息息相关的,没有绝对好的方案,只有当下是否适合的方案。同时,要尽可能多考虑几步,但也不用太多(时刻谨记:「过度优化是原罪」),比如你用户只有不到1万,上来就搞个分布式的设计方案就有点坑了。但这并不妨碍你在代码和架构设计时考虑扩展性,比如数据库,我们可能用SQLite,但你代码里可不能直接和它耦合死,可以使用能同时支持其他数据库,甚至分布式数据库的ORM工具。这样虽然写起来稍微麻烦了一点点(真的是一点点),但你的代码更加清晰,而且和可能会变化的东西解耦了。这样如果日后规模上去了,数据库可以随便换,代码基本不用动。

  最后,我们也应该了解ChatGPT的一些局限,除了它本身的局限(这块内容可以参考后面专门讲缺陷的章节),在工程上始终应该关注下面几个话题:

  • 响应时间和稳定性

  • 并发和横向可扩展性

  • 可维护性和迭代

  • 成本

  只有当这些都能满足你的期望时,才应该选择。始终记住,人才是关键,不要被任何工具绑架。

注①:关于In-Context方式和微调方式的通俗解释
  In-Context主要是利用ChatGPT的理解能力,把它当做超级大脑,我们把相关上下文给它,让它根据上下文回答问题,就类似前面的《文档问答》。对于不确定的问题,还可以设计兜底话术。
  微调方式则是直接将自定义数据喂给ChatGPT的微调接口(现在没有开放,但理论上可行),让它学习这些自定义内容,之后直接问就好了,就像我们现在直接问它「中国的首都是哪里」,它可以正确回答一样。

  下面,我们一起使用ChatGPT来实现一个对话机器人。设计阶段首先至少需要考虑以下一些因素(这并不包括上面提到的那些):

  • 使用目的

  • 如何使用

  • 消息查询、存储

  • 消息解析

  • 实时干预

  • 更新策略

  首先,咱们需要明确使用目的是什么,如上所言,不同的用途咱们要考虑的因素也不一样。简单(但很实际)起见,我们以一个「订餐机器人」为例,简单的开场白后获取用户联系方式、订餐人数、用餐时间三个信息。

  使用也比较简单,主要利用ChatGPT的多轮对话能力即可,这里的重点是控制上下文。不过由于任务简单,我们不用对历史记录做召回再进行对话,直接在每一轮时把已经获取的信息告诉它,同时让它继续获取其他信息,直到所有信息获取完毕为止。另外,我们可以限制一下输出Token的数量(输出文本的长度)。

  对于用户的消息(以及机器人的回复),实际中往往需要存储起来,用来做每一轮回复的历史消息召回。而且这个日后还可能有其他用途,比如使用对话记录对用户进行画像,或者用来当做训练数据等等。存储可以直接放到数据库,或传到类似ElasticSearch这样的内部搜索引擎中。

  消息的解析可以实时进行(并不一定要用ChatGPT)或离线进行,本案例我们需要实时在线解析。这个过程我们可以让ChatGPT在生成回复时顺便做掉。

  实时干预是应该要关注的,或者需要设计这样的模块。一方面是回复内容有时候即便做了限制,依然有可能被某些问法问到不太合适的答复;另一方面也不能排除部分恶意用户对机器人进行攻击,因此最好有干预机制的设计。这里,我们设计一个简单策略:检测用户是否提问敏感类问题,如果发现此类问题直接返回设定好的文本,不需要调用ChatGPT进行对话回复。

  更新策略主要是对企业知识库的更新,这里由于我们使用的是In-Context能力,所以并不需要调整ChatGPT,可能需要调整Embedding接口(目前openai不支持)。此案例暂不涉及。

  综上,我们需要先对用户输入进行敏感性检查,没问题后开始对话。同时应存储用户消息,并在每轮对话时将用户历史消息传递给接口。

In [20]:

from openai import OpenAI
openai_client = OpenAI()
OPENAI_API_KEY = "填入专属的API key"

api_key = OPENAI_API_KEY
/usr/local/lib/python3.8/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.15) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

  先看一下敏感性检查,这个接口比较多,openai提供了一个相关的接口,国内几大厂商也有相关API。这个本身是和对话无关的。我们以openai接口为例。

In [197]:

import requests

def check_risk(inp: str) -> bool:
    safe_api = "https://api.openai.com/v1/moderations"
    resp = requests.post(safe_api, json={"input": inp}, headers={"Authorization": f"Bearer {OPENAI_API_KEY}"})
    data = resp.json()
    return data["results"][0]["flagged"]

In [198]:

check_risk("good")

Out[198]:

False

  接下来我们考虑如何构造接口的输入,这里有两个事情要做:

  1. 查询历史对话记录作为上下文,简单起见我们可以只考虑上一轮。

  2. 计算输入的Token数,根据模型能接受最大Token长度和想输出的最大长度,反推上下文的最大长度,并对历史对话进行处理(如截断)。

In [243]:

from dataclasses import dataclass, asdict
from typing import List, Dict
from datetime import datetime
import uuid
import json
import re

In [118]:

@dataclass
class User:
    
    user_id: str
    user_name: str

In [138]:

@dataclass
class ChatSession:
    
    user_id: str
    session_id: str
    cellphone: str
    people_number: int
    meal_time: str
    chat_at: datetime

@dataclass
class ChatRecord:
    
    user_id: str
    session_id: str
    user_input: str
    bot_output: str
    chat_at: datetime

  上面我们首先设计了两个简单的数据结构,一个是聊天信息,一个是聊天记录,前者记录聊天基本信息,后者记录聊天记录。其中,session_id主要用来区分每一次对话,当用户点击产品页面的「开始对话」之类的按钮后,就生成一个session_id;在下次对话时再生成一个新的。

  接下来,我们处理核心对话逻辑,这一块主要是利用ChatGPT的能力,明确要求,把每一轮对话都喂给它。给出响应。

In [279]:

def ask(msg):
    response = openai_client.chat.completions.create(
        model="gpt-3.5-turbo", 
        temperature=0.2,
        max_tokens=100,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        messages=msg
    )
    ans = response.choices[0].message.content
    return ans

In [ ]:

!pip install sqlalchemy

In [284]:

from sqlalchemy import insert

class Chatbot:
    
    def __init__(self):
        self.system_inp = """现在你是一个订餐机器人(角色是assistant),你的目的是向用户获取手机号码、用餐人数量和用餐时间三个信息。你可以自由回复用户消息,但牢记你的目的。每一轮你需要输出给用户的回复,以及获取到的信息,信息应该以JSON方式存储,包括三个key:cellphone表示手机号码,people_number表示用餐人数,meal_time表示用餐时间储。

回复格式:
给用户的回复:{回复给用户的话}
获取到的信息:{"cellphone": null, "people_number": null, "meal_time": null}
"""
        self.max_round = 10
        self.slot_labels = ["meal_time", "people_number", "cellphone"]
        self.reg_msg = re.compile(r"\n+")


    def check_over(self, slot_dict: dict):
        for label in self.slot_labels:
            if slot_dict.get(label) is None:
                return False
        return True
    
    def send_msg(self, msg: str):
        print(f"机器人:{msg}")
    
    def chat(self, user_id: str):
        sess_id = uuid.uuid4().hex
        chat_at = datetime.now()
        msg = [
            {"role": "system", "content": self.system_inp},
        ]
        n_round = 0
        
        history = []
        while True:
            if n_round > self.max_round:
                bot_msg = "非常感谢您对我们的支持,再见。"
                self.send_msg(bot_msg)
                break
            
            try:
                bot_inp = ask(msg)
            except Exception as e:
                bot_msg = "机器人出错,稍后将由人工与您联系,谢谢。"
                self.send_msg(bot_msg)
                break
            
            tmp = self.reg_msg.split(bot_inp)
            bot_msg = tmp[0].strip("给用户的回复:")
            self.send_msg(bot_msg)
            if len(tmp) > 1:
                slot_str = tmp[1].strip("获取到的信息:")
                slot = json.loads(slot_str)
                print(f"\tslot: {slot}")
            n_round += 1
            
            if self.check_over(slot):
                break

            user_inp = input()
            
            msg += [
                {"role": "assistant", "content": bot_inp},
                {"role": "user", "content": user_inp},
            ]
            
            record = ChatRecord(user_id, sess_id, bot_inp, user_inp, datetime.now())
            history.append(record)
            
            if check_risk(user_inp):
                break
        
        chat_sess = ChatSession(user_id, sess_id, **slot, chat_at=chat_at)
        self.store(history, chat_sess)
    
    
    def store(self, history: List[ChatRecord], chat: ChatSession):
        with SessionLocal.begin() as sess:
            q = insert(
                chat_record_table
            ).values(
                [asdict(v) for v in history]
            )
            sess.execute(q)
        with SessionLocal.begin() as sess:
            q = insert(
                chat_session_table
            ).values(
                [asdict(chat)]
            )
            sess.execute(q)

  接下来,我们把两张表建好(User表这里就不建了)。注意:建表只要一次。

In [285]:

from sqlalchemy import Table, Column, Integer, String, DateTime, Text, MetaData, SmallInteger
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import os


db_file = "chatbot.db"

if os.path.exists(db_file):
    os.remove(db_file)

engine = create_engine(f"sqlite:///{db_file}")
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

metadata_obj = MetaData()

chat_record_table = Table(
    "chat_record_table",
    metadata_obj,
    Column("id", Integer, primary_key=True),
    Column("user_id", String(64), index=True),
    Column("session_id", String(64), index=True),
    Column("user_input", Text),
    Column("bot_output", Text),
    Column("chat_at", DateTime),
)

chat_session_table = Table(
    "chat_session_table",
    metadata_obj,
    Column("id", Integer, primary_key=True),
    Column("user_id", String(64), index=True),
    Column("session_id", String(64), index=True),
    Column("cellphone", String(16)),
    Column("people_number", SmallInteger),
    Column("meal_time", String(32)),
    Column("chat_at", DateTime),
)

metadata_obj.create_all(engine, checkfirst=True)

In [286]:

!ls -la ./chatbot.db
-rw-r--r--  1 Yam  staff  28672 Apr  8 00:00 ./chatbot.db

  现在我们进行简单的尝试:

In [ ]:

!pip install pnlp

In [287]:

import pnlp
nick = "长琴"
user = User(pnlp.generate_uuid(nick), nick)
chatbot = Chatbot()
chatbot.chat(user.user_id)
机器人:请问您的手机号码是多少呢?
	slot: {'cellphone': None, 'people_number': None, 'meal_time': None}
我的手机是13788889999
机器人:好的,您的手机号码是13788889999,请问用餐人数是几位呢?
	slot: {'cellphone': '13788889999', 'people_number': None, 'meal_time': None}
我们一共五个人
机器人:好的,您们一共五个人,最后,请问您们的用餐时间是什么时候呢?
	slot: {'cellphone': '13788889999', 'people_number': 5, 'meal_time': None}
稍等我问一下啊
机器人:好的,没问题,我等您的消息。
好了,明天下午7点,谢谢
机器人:好的,您们的用餐时间是明天下午7点,我们已经为您记录好了,请问还有其他需要帮助的吗?
	slot: {'cellphone': '13788889999', 'people_number': 5, 'meal_time': '明天下午7点'}

  查表看看刚刚的记录:

In [288]:

import sqlite3

In [289]:

def query_table(table: str):
    con = sqlite3.connect("chatbot.db")
    cur = con.cursor()
    q = cur.execute(f"SELECT * FROM {table}")
    return q.fetchall()

In [290]:

query_table("chat_session_table")

Out[290]:

[(1,
  'dc3be3b3516555d3b0b6a77a1d9c7e82',
  '05a88a8e3db8490eacf14b8bb9800fcc',
  '13788889999',
  5,
  '明天下午7点',
  '2023-04-08 00:00:34.618232')]

In [291]:

query_table("chat_record_table")

Out[291]:

[(1,
  'dc3be3b3516555d3b0b6a77a1d9c7e82',
  '05a88a8e3db8490eacf14b8bb9800fcc',
  '给用户的回复:请问您的手机号码是多少呢?\n获取到的信息:{"cellphone": null, "people_number": null, "meal_time": null}',
  '我的手机是13788889999',
  '2023-04-08 00:00:47.498172'),
 (2,
  'dc3be3b3516555d3b0b6a77a1d9c7e82',
  '05a88a8e3db8490eacf14b8bb9800fcc',
  '给用户的回复:好的,您的手机号码是13788889999,请问用餐人数是几位呢?\n获取到的信息:{"cellphone": "13788889999", "people_number": null, "meal_time": null}',
  '我们一共五个人',
  '2023-04-08 00:01:18.694161'),
 (3,
  'dc3be3b3516555d3b0b6a77a1d9c7e82',
  '05a88a8e3db8490eacf14b8bb9800fcc',
  '给用户的回复:好的,您们一共五个人,最后,请问您们的用餐时间是什么时候呢?\n获取到的信息:{"cellphone": "13788889999", "people_number": 5, "meal_time": null}',
  '稍等我问一下啊',
  '2023-04-08 00:01:40.296970'),
 (4,
  'dc3be3b3516555d3b0b6a77a1d9c7e82',
  '05a88a8e3db8490eacf14b8bb9800fcc',
  '好的,没问题,我等您的消息。',
  '好了,明天下午7点,谢谢',
  '2023-04-08 00:02:15.839735')]

  上面我们实现了一个非常简易的任务机器人,虽然简易,但我们其实很容易就能发现,NLU、DM和NLG三个模块已经完全不需要了。唯一的不足可能是接口反应有点慢,从对话来看其实并没有太多问题。

  另外,需要再次对几个问题进行强调,以便大家可以更好地构建应用。

  第一点,当要支持的对话轮次非常多时(比如培训、面试这样的场景),则需要实时将每一轮的对话索引起来,每一轮先召回所有历史对话中相关的topN轮作为上下文(正如我们在文档问答中那样)。然后让ChatGPT根据这些上下文对用户进行回复。这样理论上我们是可以支持无限轮的。召回的过程其实就是一个回忆的过程,这里可以优化的点或者说想象的空间很大。

  第二点,在传递message参数给ChatGPT时,由于有长度限制,有时候上下文中遇到特别长回复那种轮次,可能会导致只能传几轮(甚至一轮就耗光长度了)。根据ChatGPT自己的说法:当历史记录非常长时,我们确实可能只能利用其中的一小部分来生成回答。为了应对这种情况,通常会使用一些技术来选择最相关的历史记录,以便在生成回答时使用。例如,可能会使用一些关键词提取技术,识别出历史记录中最相关的信息,并将其与当前的输入一起使用。

  此外,还可能会使用一些摘要技术来对历史记录进行压缩和精简,以便在生成回答时只使用最重要的信息。另外,还可以使用一些记忆机制,例如注意力机制,以便在历史记录中选择最相关的信息。虽然这些技术可以帮助模型在历史记录很长时选择最相关的信息,但在某些情况下,历史记录仍然可能过于复杂,导致模型难以正确理解和处理。在这种情况下,可能需要使用其他技术来限制历史记录的长度或提供其他方面的辅助信息,以便模型可以更好地理解和回答用户的问题。

  另外,根据ChatGPT的说法,在生成回复时,它也会使用一些技术来限制输出长度,例如截断输出或者使用一些策略来生成更加简洁的回答。当然,用户也可以使用特定的输入限制或规则来帮助缩短回答。总之,尽可能地在输出长度和回答质量之间进行平衡。

  第三点,充分考虑安全性,根据实际情况合理设计架构(但不要过度设计)。

  最后,值得一提的是,上面只是利用了ChatGPT的一丢丢功能,大家完全可以结合自己的业务,或者大开脑洞,开发更多有用、有趣的产品和应用。

相关文献

memo: