
moddify loader annotation

tags/v0.5.0
Yige Xu 5 years ago
commit 0d0bd0ac3d
4 changed files with 199 additions and 85 deletions
  1. fastNLP/io/loader/classification.py (+51, -27)
  2. fastNLP/io/loader/conll.py (+61, -22)
  3. fastNLP/io/loader/coreference.py (+12, -6)
  4. fastNLP/io/loader/matching.py (+75, -30)

fastNLP/io/loader/classification.py (+51, -27)

@@ -163,14 +163,21 @@ class YelpPolarityLoader(YelpLoader):


class IMDBLoader(Loader):
"""
The raw data should contain one sample per line, with the target before the tab character and the text content after it.

Example::

neg Alan Rickman & Emma...
neg I have seen this...

The data read by IMDBLoader has the following two columns: raw_words: str, the text to be classified; target: str, the label of the text.
The loaded DataSet has the following structure:

.. csv-table::
:header: "raw_words", "target"

"Bromwell High is a cartoon ... ", "pos"
"Story of a man who has ...", "neg"
"Alan Rickman & Emma... ", "neg"
"I have seen this... ", "neg"
"...", "..."

"""
@@ -241,13 +248,20 @@ class IMDBLoader(Loader):


class SSTLoader(Loader):
"""
The raw data should look like:

Example::

(2 (3 (3 Effective) (2 but)) (1 (1 too-tepid)...
(3 (3 (2 If) (3 (2 you) (3 (2 sometimes)...

The loaded DataSet has the following structure

.. csv-table:: Fields of the DataSet loaded by SSTLoader
:header: "raw_words"

"(3 (2 It) (4 (4 (2 's) (4 (3 (2 a)..."
"(4 (4 (2 Offers) (3 (3 (2 that) (3 (3 rare)..."
"(2 (3 (3 Effective) (2 but)) (1 (1 too-tepid)..."
"(3 (3 (2 If) (3 (2 you) (3 (2 sometimes) ..."
"..."

The raw_words column is str.
@@ -286,14 +300,21 @@ class SSTLoader(Loader):


class SST2Loader(Loader):
"""
Loader for the SST2 data.
In the raw data, the first line is a header (its content is ignored); after that, each line is one sample, where the text before the first tab is taken as the sentence and the text after it as the label.

Example::

sentence label
it 's a charming and often affecting journey . 1
unflinchingly bleak and desperate 0

After loading, the DataSet looks like

.. csv-table::
:header: "raw_words", "target"

"it 's a charming and often affecting...", "1"
"unflinchingly bleak and...", "0"
"it 's a charming and often affecting journey .", "1"
"unflinchingly bleak and desperate", "0"
"..."

The test DataSet has no target column.
@@ -351,18 +372,17 @@ class ChnSentiCorpLoader(Loader):


Example::

label raw_chars
1 這間酒店環境和服務態度亦算不錯,但房間空間太小~~
1 <荐书> 推荐所有喜欢<红楼>的红迷们一定要收藏这本书,要知道...
0 商品的不足暂时还没发现,京东的订单处理速度实在.......周二就打包完成,周五才发货...
label text_a
1 基金痛所有投资项目一样,必须先要有所了解...
1 系统很好装,LED屏是不错,就是16比9的比例...

The loaded DataSet has the following fields

.. csv-table::
:header: "raw_chars", "target"

"這間酒店環境和服務態度亦算不錯,但房間空間太小~~", "1"
"<荐书> 推荐所有喜欢<红楼>...", "1"
"基金痛所有投资项目一样,必须先要有所了解...", "1"
"系统很好装,LED屏是不错,就是16比9的比例...", "1"
"..."

"""
@@ -402,15 +422,19 @@ class ChnSentiCorpLoader(Loader):


class THUCNewsLoader(Loader):
"""
Aliases:
Dataset description: document-level classification task, 10-class news classification
Raw data format: one sample per line; the target comes before the first '\t' and raw_words after it

Example::

体育 调查-您如何评价热火客场胜绿军总分3-1夺赛点?...

The loaded DataSet has the following structure:

.. csv-table::
:header: "raw_words", "target"
"马晓旭意外受伤让国奥警惕 无奈大雨格外青睐殷家军记者傅亚雨沈阳报道 ... ", "体育"
"调查-您如何评价热火客场胜绿军总分3-1夺赛点?...", "体育"
"...", "..."

"""
@@ -446,21 +470,21 @@ class WeiboSenti100kLoader(Loader):
""" """
别名: 别名:
数据集简介:微博sentiment classification,二分类 数据集简介:微博sentiment classification,二分类
原始数据内容为:
.. .. code-block:: text
label text
0 六一出生的?好讽刺…… //@祭春姬:他爸爸是外星人吧 //@面孔小高:现在的孩子都怎么了 [怒][怒][怒]
1 听过一场!笑死了昂,一听茄子脱口秀,从此节操是路人![嘻嘻] //@中国梦网官微:@Pencil彭赛 @茄子脱口秀 [圣诞帽][圣诞树][平安果]

Example::

label text
1 多谢小莲,好运满满[爱你]
1 能在他乡遇老友真不赖,哈哈,珠儿,我也要用...

读取后的Dataset将具有以下数据结构: 读取后的Dataset将具有以下数据结构:


.. csv-table:: .. csv-table::
:header: "raw_chars", "target"
:header: "raw_chars", "target"
"六一出生的?好讽刺…… //@祭春姬:他爸爸是外星人吧 //@面孔小高:现在的孩子都怎么了 [怒][怒][怒]", "0"
"...", "..."
"多谢小莲,好运满满[爱你]", "1"
"能在他乡遇老友真不赖,哈哈,珠儿,我也要用...", "1"
"...", "..."


""" """




fastNLP/io/loader/conll.py (+61, -22)

@@ -213,13 +213,21 @@ class OntoNotesNERLoader(ConllLoader):
Used to read OntoNotes NER data, which is also the NER task data of Conll2012. The process of converting OntoNotes data into the conll format can be found at
https://github.com/yhcc/OntoNotes-5.0-NER. OntoNotesNERLoader takes the content of the 4th and the 11th columns.

The data format being read is:

Example::

bc/msnbc/00/msnbc_0000 0 0 Hi UH (TOP(FRAG(INTJ*) - - - Dan_Abrams * -
bc/msnbc/00/msnbc_0000 0 1 everyone NN (NP*) - - - Dan_Abrams * -
...

The content of the returned DataSet is

.. csv-table::
:header: "raw_words", "target"

"[Nadim, Ladki]", "[B-PER, I-PER]"
"[AL-AIN, United, Arab, ...]", "[B-LOC, B-LOC, I-LOC, ...]"
"['Hi', 'everyone', '.']", "['O', 'O', 'O']"
"['first', 'up', 'on', 'the', 'docket']", "['O', 'O', 'O', 'O', 'O']"
"[...]", "[...]"

"""
@@ -375,13 +383,22 @@ class MsraNERLoader(CNNERLoader):


Example::

我 O
们 O
变 O
而 O
以 O
书 O
会 O
把 O
欧 B-LOC

美 B-LOC
、 O

港 B-LOC
台 B-LOC

流 O
行 O

的 O

食 O

...

The loaded DataSet contains the following fields
@@ -389,8 +406,8 @@ class MsraNERLoader(CNNERLoader):
.. csv-table::
:header: "raw_chars", "target"

"[我, 们, 变...]", "[O, O, ...]"
"[中, 共, 中, ...]", "[B-ORG, I-ORG, I-ORG, ...]"
"['把', '欧'] ", "['O', 'B-LOC']"
"['美', '、']", "['B-LOC', 'O']"
"[...]", "[...]"

"""
@@ -449,6 +466,30 @@ class MsraNERLoader(CNNERLoader):




class WeiboNERLoader(CNNERLoader):
"""
Reads WeiboNER data; the format of the data should be similar to the following content

Example::

老 B-PER.NOM
百 I-PER.NOM
姓 I-PER.NOM

心 O

...

The loaded DataSet contains the following fields

.. csv-table::

:header: "raw_chars", "target"

"['老', '百', '姓']", "['B-PER.NOM', 'I-PER.NOM', 'I-PER.NOM']"
"['心']", "['O']"
"[...]", "[...]"

"""
def __init__(self):
super().__init__()
@@ -471,23 +512,21 @@ class PeopleDailyNERLoader(CNNERLoader):


Example::

当 O
希 O
望 O
工 O
程 O
救 O
助 O
的 O
百 O
中 B-ORG
共 I-ORG
中 I-ORG
央 I-ORG

致 O
中 B-ORG
...

The loaded DataSet contains the following fields

.. csv-table:: The target column uses the BIO encoding scheme
:header: "raw_chars", "target"

"[我, 们, 变...]", "[O, O, ...]"
"[中, 共, 中, ...]", "[B-ORG, I-ORG, I-ORG, ...]"
"['中', '共', '中', '央']", "['B-ORG', 'I-ORG', 'I-ORG', 'I-ORG']"
"[...]", "[...]"

"""


fastNLP/io/loader/coreference.py (+12, -6)

@@ -17,13 +17,19 @@ class CoReferenceLoader(JsonLoader):


Example::

{"doc_key":"bc/cctv/00/cctv_001",
"speakers":"[["Speaker1","Speaker1","Speaker1"],["Speaker1","Speaker1","Speaker1"]]",
"clusters":"[[[2,3],[4,5]],[7,8],[18,20]]]",
"sentences":[["I","have","an","apple"],["It","is","good"]]
}
{"doc_key": "bc/cctv/00/cctv_0000_0",
"speakers": [["Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1"], ["Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1"], ["Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1", "Speaker#1"]],
"clusters": [[[70, 70], [485, 486], [500, 500], [73, 73], [55, 55], [153, 154], [366, 366]]],
"sentences": [["In", "the", "summer", "of", "2005", ",", "a", "picture", "that", "people", "have", "long", "been", "looking", "forward", "to", "started", "emerging", "with", "frequency", "in", "various", "major", "Hong", "Kong", "media", "."], ["With", "their", "unique", "charm", ",", "these", "well", "-", "known", "cartoon", "images", "once", "again", "caused", "Hong", "Kong", "to", "be", "a", "focus", "of", "worldwide", "attention", "."]]
}


Reads preprocessed Conll2012 data; the data structure is as follows:

.. csv-table::
:header: "raw_words1", "raw_words2", "raw_words3", "raw_words4"

"bc/cctv/00/cctv_0000_0", "[['Speaker#1', 'Speaker#1', 'Speaker#1...", "[[[70, 70], [485, 486], [500, 500], [7...", "[['In', 'the', 'summer', 'of', '2005',..."
"...", "...", "...", "..."


""" """
def __init__(self, fields=None, dropna=False): def __init__(self, fields=None, dropna=False):
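
A minimal sketch of reading the jsonlines format documented above with CoReferenceLoader (not part of the commit; the file path is hypothetical and passing a dict of split names to load() is an assumption about the Loader interface)::

    # Minimal sketch, not part of this commit; the path below is hypothetical.
    from fastNLP.io.loader.coreference import CoReferenceLoader

    data_bundle = CoReferenceLoader().load({'train': '/path/to/train.english.jsonlines'})
    ds = data_bundle.get_dataset('train')
    # fields follow the table above: raw_words1=doc_key, raw_words2=speakers,
    # raw_words3=clusters, raw_words4=sentences
    print(ds[0]['raw_words1'])
    print(ds[0]['raw_words4'])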


fastNLP/io/loader/matching.py (+75, -30)

@@ -27,15 +27,24 @@ from ...core.instance import Instance


class MNLILoader(Loader):
"""
The data format being read is:

Example::

index promptID pairID genre sentence1_binary_parse sentence2_binary_parse sentence1_parse sentence2_parse sentence1 sentence2 label1 gold_label
0 31193 31193n government ( ( Conceptually ( cream skimming ) ) ...
1 101457 101457e telephone ( you ( ( know ( during ( ( ( the season ) and ) ( i guess ) ) )...
...

Reads data for the MNLI task. The loaded DataSet contains the following: words0 is sentence1, words1 is sentence2, and target is gold_label; the test set has no target column.

.. csv-table::
:header: "raw_words1", "raw_words2", "target"

"The new rights are...", "Everyone really likes..", "neutral"
"This site includes a...", "The Government Executive...", "contradiction"
"...", "...","."
"Conceptually cream ...", "Product and geography...", "neutral"
"you know during the ...", "You lose the things to the...", "entailment"
"...", "...", "..."


""" """
@@ -113,14 +122,28 @@ class MNLILoader(Loader):


class SNLILoader(JsonLoader):
"""
Each line of the file is one sample, and each line is a json object whose format is:

Example::

{"annotator_labels": ["neutral", "entailment", "neutral", "neutral", "neutral"], "captionID": "4705552913.jpg#2",
"gold_label": "neutral", "pairID": "4705552913.jpg#2r1n",
"sentence1": "Two women are embracing while holding to go packages.",
"sentence1_binary_parse": "( ( Two women ) ( ( are ( embracing ( while ( holding ( to ( go packages ) ) ) ) ) ) . ) )",
"sentence1_parse": "(ROOT (S (NP (CD Two) (NNS women)) (VP (VBP are) (VP (VBG embracing) (SBAR (IN while) (S (NP (VBG holding)) (VP (TO to) (VP (VB go) (NP (NNS packages)))))))) (. .)))",
"sentence2": "The sisters are hugging goodbye while holding to go packages after just eating lunch.",
"sentence2_binary_parse": "( ( The sisters ) ( ( are ( ( hugging goodbye ) ( while ( holding ( to ( ( go packages ) ( after ( just ( eating lunch ) ) ) ) ) ) ) ) ) . ) )",
"sentence2_parse": "(ROOT (S (NP (DT The) (NNS sisters)) (VP (VBP are) (VP (VBG hugging) (NP (UH goodbye)) (PP (IN while) (S (VP (VBG holding) (S (VP (TO to) (VP (VB go) (NP (NNS packages)) (PP (IN after) (S (ADVP (RB just)) (VP (VBG eating) (NP (NN lunch))))))))))))) (. .)))"
}

The fields of the loaded DataSet are as follows

.. csv-table:: Fields of the DataSet loaded by SNLILoader
:header: "raw_words1", "raw_words2", "target"
:header: "target", "raw_words1", "raw_words2",

"The new rights are...", "Everyone really likes..", "neutral"
"This site includes a...", "The Government Executive...", "entailment"
"...", "...", "."
"neutral ", "Two women are embracing while holding..", "The sisters are hugging goodbye..."
"entailment", "Two women are embracing while holding...", "Two woman are holding packages."
"...", "...", "..."


""" """
@@ -174,6 +197,13 @@ class SNLILoader(JsonLoader):


class QNLILoader(JsonLoader):
"""
The first line is a header (its content is ignored); each subsequent line is one sample consisting of index, question, sentence and label, separated by tabs. The data structure is as follows:

Example::

index question sentence label
0 What came into force after the new constitution was herald? As of that day, the new constitution heralding the Second Republic came into force. entailment

Loader for the QNLI dataset.
The loaded DataSet has the following fields: raw_words1 is the question, raw_words2 is the sentence, and target is the label


@@ -181,7 +211,6 @@ class QNLILoader(JsonLoader):
:header: "raw_words1", "raw_words2", "target"

"What came into force after the new...", "As of that day...", "entailment"
"What is the first major...", "The most important tributaries", "not_entailment"
"...","."

The test set has no target column
@@ -231,6 +260,13 @@ class QNLILoader(JsonLoader):


class RTELoader(Loader):
"""
The first line is a header (its content is ignored); each subsequent line is one sample consisting of index, sentence1, sentence2 and label, separated by tabs. The data structure is as follows:

Example::

index sentence1 sentence2 label
0 Dana Reeve, the widow of the actor Christopher Reeve, has died of lung cancer at age 44, according to the Christopher Reeve Foundation. Christopher Reeve had an accident. not_entailment

Loader for RTE data.
The loaded DataSet has the following fields: raw_words1 is sentence0, raw_words2 is sentence1, and target is the label


@@ -238,8 +274,7 @@ class RTELoader(Loader):
:header: "raw_words1", "raw_words2", "target"

"Dana Reeve, the widow of the actor...", "Christopher Reeve had an...", "not_entailment"
"Yet, we now are discovering that...", "Bacteria is winning...", "entailment"
"...","."
"...","..."

The test set has no target column
"""
@@ -294,7 +329,7 @@ class QuoraLoader(Loader):
Example::

1 How do I get funding for my web based startup idea ? How do I get seed funding pre product ? 327970
1 How can I stop my depression ? What can I do to stop being depressed ? 339556
0 Is honey a viable alternative to sugar for diabetics ? How would you compare the United States ' euthanasia laws to Denmark ? 90348
...

The loaded DataSet has the following fields
@@ -302,9 +337,9 @@ class QuoraLoader(Loader):
.. csv-table::
:header: "raw_words1", "raw_words2", "target"

"What should I do to avoid...", "1"
"How do I not sleep in a boring class...", "0"
"...","."
"How do I get funding for my web based...", "How do I get seed funding...","1"
"Is honey a viable alternative ...", "How would you compare the United...","0"
"...","...","..."

"""
@@ -339,17 +374,21 @@ class QuoraLoader(Loader):


class CNXNLILoader(Loader):
"""
Aliases:
Dataset description: Chinese sentence-pair NLI (originally a multi-lingual dataset; only the Chinese portion is taken here). The original sentences were processed by the MOSES tokenizer; here we restore them and re-tokenize by character.
The data in train contains three fields: premise, hypo and label.
The raw data is:

Example::

premise hypo label
我们 家里 有 一个 但 我 没 找到 我 可以 用 的 时间 我们 家里 有 一个 但 我 从来 没有 时间 使用 它 . entailment

The data in dev and test is in csv or json format and contains a dozen or so fields; only the data for the three fields above is taken.
The loaded DataSet has the following structure:

.. csv-table::
:header: "raw_chars1", "raw_chars2", "target"
"从概念上看,奶油收入有两个基本方面产品和地理.", "产品和地理是什么使奶油抹霜工作.", "1"
"我们 家里 有 一个 但 我 没 找到 我 可以 用 的 时间", "我们 家里 有 一个 但 我 从来 没有 时间 使用 它 .", "0"
"...", "...", "..." "...", "...", "..."


""" """
@@ -432,16 +471,21 @@ class BQCorpusLoader(Loader):
""" """
别名: 别名:
数据集简介:句子对二分类任务(判断是否具有相同的语义) 数据集简介:句子对二分类任务(判断是否具有相同的语义)
原始数据内容为:
每行一个sample,第一个','之前为text1,第二个','之前为text2,第二个','之后为target
第一行为sentence1 sentence2 label
原始数据结构为:

Example::

sentence1,sentence2,label
综合评分不足什么原因,综合评估的依据,0
什么时候我能使用微粒贷,你就赶快给我开通就行了,0

The loaded DataSet has the following structure:

.. csv-table::
:header: "raw_chars1", "raw_chars2", "target"
"不是邀请的如何贷款?", "我不是你们邀请的客人可以贷款吗?", "1"
"如何满足微粒银行的审核", "建设银行有微粒贷的资格吗", "0"
"综合评分不足什么原因", "综合评估的依据", "0"
"什么时候我能使用微粒贷", "你就赶快给我开通就行了", "0"
"...", "...", "..." "...", "...", "..."


""" """
@@ -480,19 +524,20 @@ class LCQMCLoader(Loader):
Dataset description: sentence-pair matching (question matching)
The raw data is:
.. code-block:: text
'喜欢打篮球的男生喜欢什么样的女生\t爱打篮球的男生喜欢什么样的女生\t1\n'
'晚上睡觉带着耳机听音乐有什么害处吗?\t孕妇可以戴耳机听音乐吗?\t0\n'

Example::

喜欢打篮球的男生喜欢什么样的女生 爱打篮球的男生喜欢什么样的女生 1
你帮我设计小说的封面吧 谁能帮我给小说设计个封面? 0

The loaded DataSet has the following data structure
.. csv-table::
:header: "raw_chars1", "raw_chars2", "target"
"喜欢打篮球的男生喜欢什么样的女生", "爱打篮球的男生喜欢什么样的女生", "1"
"晚上睡觉带着耳机听音乐有什么害处吗?", "妇可以戴耳机听音乐吗?", "0"
"喜欢打篮球的男生喜欢什么样的女生", "爱打篮球的男生喜欢什么样的女生", "1"
"你帮我设计小说的封面吧", "妇可以戴耳机听音乐吗?", "0"
"...", "...", "..." "...", "...", "..."

