Tag Archives: python

An Introduction to the Natural Language Processing Toolkit spaCy

spaCy is a Python natural language processing toolkit that appeared in mid-2014. Billed as "Industrial-Strength Natural Language Processing in Python", it aims to be an NLP toolkit of industrial strength. spaCy makes heavy use of Cython to speed up its core modules, which sets it apart from the more academically oriented Python NLTK and gives it real value for industrial applications.

Installing and building spaCy is straightforward; on Ubuntu it can be installed directly with pip:

sudo apt-get install build-essential python-dev git
sudo pip install -U spacy

After installation, though, the model data still needs to be downloaded. Taking the English models as an example, the "all" argument downloads everything:

sudo python -m spacy.en.download all

Alternatively, you can download the models and the GloVe-trained word vectors separately:

# This downloads the English tokenizer and the part-of-speech tagging, syntactic parsing, and named entity recognition models
python -m spacy.en.download parser

# This downloads the GloVe-trained word vectors
python -m spacy.en.download glove

The downloaded data is placed in the data directory under the spaCy installation; on my Ubuntu machine:

textminer@textminer:/usr/local/lib/python2.7/dist-packages/spacy/data$ du -sh *
776M    en-1.1.0
774M    en_glove_cc_300_1m_vectors-1.0.0

Inside the English model directory:

textminer@textminer:/usr/local/lib/python2.7/dist-packages/spacy/data/en-1.1.0$ du -sh *
424M    deps
8.0K    meta.json
35M ner
12M pos
84K tokenizer
300M    vocab
6.3M    wordnet

The following command checks whether the model data was installed successfully:

textminer@textminer:~$ python -c "import spacy; spacy.load('en'); print('OK')"
OK

You can also run the test suite with pytest:

# First, find spaCy's installation path:
python -c "import os; import spacy; print(os.path.dirname(spacy.__file__))"
/usr/local/lib/python2.7/dist-packages/spacy

# Then install pytest:
sudo python -m pip install -U pytest

# Finally, run the tests:
python -m pytest /usr/local/lib/python2.7/dist-packages/spacy --vectors --model --slow
============================= test session starts ==============================
platform linux2 -- Python 2.7.12, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: /usr/local/lib/python2.7/dist-packages/spacy, inifile:
collected 318 items

../../usr/local/lib/python2.7/dist-packages/spacy/tests/test_matcher.py ........
../../usr/local/lib/python2.7/dist-packages/spacy/tests/matcher/test_entity_id.py ....
../../usr/local/lib/python2.7/dist-packages/spacy/tests/matcher/test_matcher_bugfixes.py .....
......
../../usr/local/lib/python2.7/dist-packages/spacy/tests/vocab/test_vocab.py .......Xx
../../usr/local/lib/python2.7/dist-packages/spacy/tests/website/test_api.py x...............
../../usr/local/lib/python2.7/dist-packages/spacy/tests/website/test_home.py ............

============== 310 passed, 5 xfailed, 3 xpassed in 53.95 seconds ===============

Now we can quickly try out spaCy's features, using English data. spaCy currently supports mainly English and German; support for other languages is being added gradually:

textminer@textminer:~$ ipython
Python 2.7.12 (default, Jul  1 2016, 15:12:24)
Type "copyright", "credits" or "license" for more information.

IPython 2.4.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import spacy          

# Load the English model data; this takes a moment
In [2]: nlp = spacy.load('en')

Word tokenization (spaCy 1.2 added a Chinese tokenization interface based on the Jieba Chinese word segmenter):

In [3]: test_doc = nlp(u"it's word tokenize test for spacy")            

In [4]: print(test_doc)
it's word tokenize test for spacy

In [5]: for token in test_doc:                                          
    print(token)
   ...:    
it
's
word
tokenize
test
for
spacy

Sentence segmentation (English):

In [6]: test_doc = nlp(u'Natural language processing (NLP) deals with the application of computational models to text or speech data. Application areas within NLP include automatic (machine) translation between languages; dialogue systems, which allow a human to interact with a machine using natural language; and information extraction, where the goal is to transform unstructured text into structured (database) representations that can be searched and browsed in flexible ways. NLP technologies are having a dramatic impact on the way people interact with computers, on the way people interact with each other through the use of language, and on the way people access the vast amount of linguistic data now in electronic form. From a scientific viewpoint, NLP involves fundamental questions of how to structure formal models (for example statistical models) of natural language phenomena, and of how to design algorithms that implement these models.')

In [7]: for sent in test_doc.sents:
    print(sent)
   ...:    
Natural language processing (NLP) deals with the application of computational models to text or speech data.
Application areas within NLP include automatic (machine) translation between languages; dialogue systems, which allow a human to interact with a machine using natural language; and information extraction, where the goal is to transform unstructured text into structured (database) representations that can be searched and browsed in flexible ways.
NLP technologies are having a dramatic impact on the way people interact with computers, on the way people interact with each other through the use of language, and on the way people access the vast amount of linguistic data now in electronic form.
From a scientific viewpoint, NLP involves fundamental questions of how to structure formal models (for example statistical models) of natural language phenomena, and of how to design algorithms that implement these models.


Lemmatization:

In [8]: test_doc = nlp(u"you are best. it is lemmatize test for spacy. I love these books")

In [9]: for token in test_doc:                                                      
    print(token, token.lemma_, token.lemma)
   ...:    
(you, u'you', 472)
(are, u'be', 488)
(best, u'good', 556)
(., u'.', 419)
(it, u'it', 473)
(is, u'be', 488)
(lemmatize, u'lemmatize', 1510296)
(test, u'test', 1351)
(for, u'for', 480)
(spacy, u'spacy', 173783)
(., u'.', 419)
(I, u'i', 570)
(love, u'love', 644)
(these, u'these', 642)
(books, u'book', 1011)

Part-of-speech tagging (POS tagging):

In [10]: for token in test_doc:                                                    
    print(token, token.pos_, token.pos)
   ....:    
(you, u'PRON', 92)
(are, u'VERB', 97)
(best, u'ADJ', 82)
(., u'PUNCT', 94)
(it, u'PRON', 92)
(is, u'VERB', 97)
(lemmatize, u'ADJ', 82)
(test, u'NOUN', 89)
(for, u'ADP', 83)
(spacy, u'NOUN', 89)
(., u'PUNCT', 94)
(I, u'PRON', 92)
(love, u'VERB', 97)
(these, u'DET', 87)
(books, u'NOUN', 89)

Named entity recognition (NER):

In [11]: test_doc = nlp(u"Rami Eid is studying at Stony Brook University in New York")

In [12]: for ent in test_doc.ents:
    print(ent, ent.label_, ent.label)
   ....:    
(Rami Eid, u'PERSON', 346)
(Stony Brook University, u'ORG', 349)
(New York, u'GPE', 350)

Noun phrase (noun chunk) extraction:

In [13]: test_doc = nlp(u'Natural language processing (NLP) deals with the application of computational models to text or speech data. Application areas within NLP include automatic (machine) translation between languages; dialogue systems, which allow a human to interact with a machine using natural language; and information extraction, where the goal is to transform unstructured text into structured (database) representations that can be searched and browsed in flexible ways. NLP technologies are having a dramatic impact on the way people interact with computers, on the way people interact with each other through the use of language, and on the way people access the vast amount of linguistic data now in electronic form. From a scientific viewpoint, NLP involves fundamental questions of how to structure formal models (for example statistical models) of natural language phenomena, and of how to design algorithms that implement these models.')


In [14]: for np in test_doc.noun_chunks:
    print(np)
   ....:    
Natural language processing
Natural language processing (NLP) deals
the application
computational models
text
speech
data
Application areas
NLP
automatic (machine) translation
languages
dialogue systems
a human
a machine
natural language
information extraction
the goal
unstructured text
structured (database) representations
flexible ways
NLP technologies
a dramatic impact
the way
people
computers
the way
people
the use
language
the way
people
the vast amount
linguistic data
electronic form
a scientific viewpoint
NLP
fundamental questions
formal models
example
natural language phenomena
algorithms
these models

Computing the similarity of two words from their word vectors:

In [15]: test_doc = nlp(u"Apples and oranges are similar. Boots and hippos aren't.")

In [16]: apples = test_doc[0]

In [17]: print(apples)
Apples

In [18]: oranges = test_doc[2]

In [19]: print(oranges)
oranges

In [20]: boots = test_doc[6]

In [21]: print(boots)
Boots

In [22]: hippos = test_doc[8]

In [23]: print(hippos)
hippos

In [24]: apples.similarity(oranges)
Out[24]: 0.77809414836023805

In [25]: boots.similarity(hippos)
Out[25]: 0.038474555379008429

spaCy also provides syntactic (dependency) parsing and other features. It is also worth noting that, starting with version 1.0, spaCy added support for hooking in deep learning tools such as TensorFlow and Keras; see the sentiment analysis example in the official documentation: Hooking a deep learning model into spaCy.
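As a quick taste of the parser (the walkthrough above does not cover it), here is a minimal sketch that reuses the nlp object loaded earlier; the sentence is an arbitrary example, and token.dep_ / token.head are the spaCy token attributes for the dependency label and the governing word:

# Dependency parsing sketch: print each token with its dependency label and its head.
# Reuses the English pipeline loaded above as `nlp`; the sentence is just an example.
doc = nlp(u"spaCy provides a fast syntactic dependency parser.")
for token in doc:
    print(token, token.dep_, token.head)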

References:
The official spaCy documentation
Getting Started with spaCy

Note: original article. When reposting, please credit and keep the link to 我爱自然语言处理 (52nlp): http://www.52nlp.cn

Permalink: An Introduction to the Natural Language Processing Toolkit spaCy http://www.52nlp.cn/?p=9386

An Index of Introductory Resources on the Backpropagation Algorithm

1. Everything starts with Wikipedia, to get a rough picture of the whole:
Backpropagation

2. Grab pen and paper, plus IPython or a calculator, and get a hands-on feel for the algorithm by working through an example (a minimal numeric sketch is also given right after this list):
A Step by Step Backpropagation Example

3. Then play with the roughly 200 lines of Python code that accompany that example: Neural Network with Backpropagation

4. With that intuition in hand, you can tackle the classic 1986 paper: Learning representations by back-propagating errors

5. If it still feels opaque, read Chapter 2 of the online book "Neural Networks and Deep Learning": How the backpropagation algorithm works

6. Or watch the first few backpropagation videos of this neural network tutorial on YouTube: Neural Network Tutorial

7. hankcs has written a walkthrough of the above videos and related material: 反向传播神经网络极简入门 (a minimal introduction to backpropagation neural networks)

8. Here is a fairly concise mathematical derivation: Derivation of Backpropagation

9. gogo explains the principles behind backpropagation together with code: 神经网络反向传播的数学原理 (the mathematics behind neural network backpropagation)

10. For a more fundamental view of backpropagation, see reverse-mode automatic differentiation: Calculus on Computational Graphs: Backpropagation
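As a companion to items 2 and 3, here is a minimal numeric sketch of a single backpropagation step for a tiny 2-2-1 sigmoid network. All of the numbers (weights, input, target, learning rate) are made-up illustration values rather than figures taken from the resources above:

# Minimal backpropagation sketch: one gradient step for a 2-2-1 sigmoid network.
# All numbers (weights, input, target, learning rate) are arbitrary illustration values.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.05, 0.10])        # input
t = np.array([0.01])              # target output
W1 = np.array([[0.15, 0.20],      # input -> hidden weights
               [0.25, 0.30]])
W2 = np.array([[0.40, 0.45]])     # hidden -> output weights
lr = 0.5                          # learning rate

# forward pass
h = sigmoid(W1.dot(x))            # hidden activations
y = sigmoid(W2.dot(h))            # network output

# backward pass for squared error E = 0.5 * (y - t)^2
delta_out = (y - t) * y * (1 - y)              # error signal at the output layer
delta_hid = W2.T.dot(delta_out) * h * (1 - h)  # error signal at the hidden layer

# gradient step
W2 -= lr * np.outer(delta_out, h)
W1 -= lr * np.outer(delta_hid, x)

print(y)   # network output before the update (a 1-element array)

Repeating the forward/backward/update cycle in a loop drives the output toward the target; the delta terms above are exactly the per-layer error signals worked out in the derivations of items 4, 5 and 8.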

Note: original article. When reposting, please credit and keep the link to 我爱自然语言处理 (52nlp): http://www.52nlp.cn

Permalink: An Index of Introductory Resources on the Backpropagation Algorithm http://www.52nlp.cn/?p=9350

Deep Learning Machine Setup: Ubuntu 16.04 + GeForce GTX 1080 + TensorFlow

Following up on the previous post, "Deep Learning Machine Setup: Ubuntu 16.04 + Nvidia GTX 1080 + CUDA 8.0", we now install TensorFlow so that it supports the GeForce GTX 1080.

1. Download and install cuDNN

cuDNN, short for the CUDA Deep Neural Network library, is NVIDIA's GPU acceleration library designed specifically for deep neural networks, and it is widely used by deep learning frameworks such as Caffe, TensorFlow, Theano, Torch, and CNTK.

The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN is part of the NVIDIA Deep Learning SDK.

Deep learning researchers and framework developers worldwide rely on cuDNN for high-performance GPU acceleration. It allows them to focus on training neural networks and developing software applications rather than spending time on low-level GPU performance tuning. cuDNN accelerates widely used deep learning frameworks, including Caffe, TensorFlow, Theano, Torch, and CNTK. See supported frameworks for more details.

First, download cuDNN from the official NVIDIA download page; as with CUDA, downloading cuDNN requires logging in and even filling out a short survey: https://developer.nvidia.com/rdp/cudnn-download. I chose cuDNN v5 for CUDA 8.0. The 5.1 version for CUDA 8 was listed among the download options but showed the notice: "cuDNN 5.1 RC for CUDA 8RC will be available soon - please check back again."

[Screenshot: the cuDNN download page, 2016-07-17]

Installing cuDNN is simple: unpack the archive and copy the files into the corresponding CUDA directories:

tar -zxvf cudnn-8.0-linux-x64-v5.0-ga.tgz

cuda/include/cudnn.h
cuda/lib64/libcudnn.so
cuda/lib64/libcudnn.so.5
cuda/lib64/libcudnn.so.5.0.5
cuda/lib64/libcudnn_static.a

sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn.h
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*
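Once TensorFlow itself is installed (that part of the setup continues after the break below), one quick sanity check, my own addition rather than part of the original instructions, is to open a session with device placement logging: the startup log should show the CUDA and cuDNN libraries being loaded and the GTX 1080 being detected.

# GPU sanity check (run only after TensorFlow has been installed): creating the
# session logs the CUDA/cuDNN libraries being loaded and the GPU devices found.
import tensorflow as tf

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(tf.constant('GPU check OK')))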

Continue reading

Stanford's Deep Learning for Natural Language Processing Course, Lecture 1: Introduction

In March, Stanford opened a course on deep learning and natural language processing: CS224d: Deep Learning for Natural Language Processing. The instructor is the young and talented Richard Socher. German by origin, he got into natural language processing as an undergraduate, specialized in computer vision during graduate study in Germany, and then pursued a PhD at Stanford under two giants of the field, Chris Manning in NLP and Andrew Ng in deep learning. His dissertation, "Recursive Deep Learning for Natural Language Processing and Computer Vision", was a fitting finish to those years of study. After graduating he co-founded MetaMind as CTO; as a rising AI startup, MetaMind raised 8 million US dollars in venture funding right at its founding and is worth keeping an eye on.

Back to the course itself: CS224d can be rendered as "Deep Learning for Natural Language Processing". It is an on-campus course for Stanford students, but all of the materials are posted online, including lecture videos, slides, related and prerequisite reading, assignments and so on; the coverage is remarkably complete. The syllabus is well structured and goes deep, starting from the basics and moving on to concrete applications of deep learning in NLP, including named entity recognition, machine translation, syntactic parsers, and sentiment analysis. Richard Socher previously gave a tutorial at ACL 2012 and NAACL 2013, "Deep Learning for NLP (without Magic)"; interested readers can start there: Deep Learning for NLP (without Magic) - ACL 2012 Tutorial - videos and slides. Since the course videos are hosted on YouTube, @爱可可-爱生活 maintains a Baidu Pan mirror: http://pan.baidu.com/s/1pJyrXaF, which is kept in sync with the course materials.
Continue reading

A Python Arsenal for Web Crawling & Text Processing & Scientific Computing & Machine Learning & Data Mining

I originally picked up Python because of NLTK, and it gradually became my primary auxiliary scripting language at work; my main development language was C/C++, but most everyday text processing tasks went to Python. After leaving Tencent to start my own company, my first product, 课程图谱 (Course Graph), was also built on the Python-based Flask framework, and bit by bit most of my work moved over to Python. Over the years I have come across and used a great many Python packages, especially for text processing, scientific computing, machine learning, and data mining; there are so many excellent Python toolkits available that being a Pythoner is a rather happy state of affairs. If you pay attention on Weibo you will find plenty of shared lists along these lines, and a quick Google search turns up summaries such as "Python machine learning libraries", but they always felt incomplete to me. A buzzword making the rounds lately is "full stack engineer"; as a struggling founder you naturally have to turn yourself into one, and these Python packages provided plenty of firepower along the way, which is what prompted this series. It is only meant to start the conversation; I hope readers will contribute more leads so that together we can compile a full arsenal of Python tools for web crawling, text processing, scientific computing, machine learning, and data mining.

Part 1: Python Web Crawling Tools

A real project always starts with acquiring data. Whether for text processing, machine learning, or data mining, you need data; apart from professional data sets you can buy or download, you often have to crawl it yourself. That is where crawlers come in, and fortunately Python offers a solid set of web crawling frameworks that can both fetch data and help clean it, so that is where we begin:

1. Scrapy

Scrapy, a fast high-level screen scraping and web crawling framework for Python.

The famous Scrapy should ring a bell for many readers; a lot of the courses on 课程图谱 (Course Graph) were crawled with it. There are plenty of introductory articles about it; I recommend pluskid's early post "Scrapy 轻松定制网络爬虫", which has aged well.

Official homepage: http://scrapy.org/
GitHub repository: https://github.com/scrapy/scrapy
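To give a flavor of what a spider looks like, here is a minimal, hypothetical example; the target URL is a placeholder, and yielding plain dicts as items assumes a reasonably recent Scrapy release:

# minimal_spider.py -- a toy Scrapy spider that collects every link on a page.
# Run with:  scrapy runspider minimal_spider.py -o links.json
import scrapy

class LinkSpider(scrapy.Spider):
    name = 'link_spider'
    start_urls = ['http://example.com/']   # placeholder start page

    def parse(self, response):
        # yield each hyperlink on the page as a scraped item
        for href in response.css('a::attr(href)').extract():
            yield {'url': href}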

2. Beautiful Soup

You didn't write that awful page. You're just trying to get some data out of it. Beautiful Soup is here to help. Since 2004, it's been saving programmers hours or days of work on quick-turnaround screen scraping projects.

I first learned about Beautiful Soup from the book "Programming Collective Intelligence" (《集体智慧编程》) back in school and have used it now and then since; it is an excellent toolkit. Strictly speaking, Beautiful Soup is not a crawler by itself and needs to be paired with something like urllib; rather, it is a toolkit for parsing, cleaning, and extracting HTML/XML data.

Official homepage: http://www.crummy.com/software/BeautifulSoup/
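As a small illustration of the "pair it with urllib" workflow mentioned above, a sketch along these lines should work (Python 2 style to match the rest of this post; the URL is only a placeholder):

# Beautiful Soup sketch: fetch a page with urllib2, then parse out the title
# and all link targets using the bs4 API.
import urllib2
from bs4 import BeautifulSoup

html = urllib2.urlopen('http://example.com/').read()
soup = BeautifulSoup(html, 'html.parser')

print soup.title.string                            # page title
print [a.get('href') for a in soup.find_all('a')]  # all hyperlinks on the page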

3. Python-Goose

Html Content / Article Extractor, web scrapping lib in Python

Goose was originally written in Java and later rewritten in Scala, making it a Scala project. Python-Goose is a Python rewrite that depends on Beautiful Soup. I used it a while ago and it works nicely: given the URL of an article, it extracts the title and body text very conveniently.

GitHub homepage: https://github.com/grangier/python-goose
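A typical call looks roughly like the following, as far as I recall the python-goose API (the article URL is a placeholder):

# python-goose sketch: extract the headline and main text of an article page.
from goose import Goose

g = Goose()
article = g.extract(url='http://example.com/some-article.html')  # placeholder URL
print article.title               # extracted headline
print article.cleaned_text[:150]  # first 150 characters of the extracted body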

Part 2: Python Text Processing Tools

Once you have text data from the web, the next step depends on the task: English needs basic tokenization, Chinese needs word segmentation, and beyond that, for either language, there is part-of-speech tagging, syntactic parsing, keyword extraction, text classification, sentiment analysis, and more. In this area, especially for English, there are many excellent toolkits, which we will go through one by one.
Continue reading

How to Compute the Similarity of Two Documents (Part 3)

In the previous part we walked through gensim's usage with a simple example. In this part we will use real data from 课程图谱 (Course Graph) to validate and improve on it, and we will use NLTK to preprocess the English course data.

Part 3: Experiments with the Course Graph Data

1. Data preparation
To make it easy for everyone to run the same validation, I have prepared a set of Coursera course data, which can be downloaded here: coursera_corpus (Baidu Pan link: http://t.cn/RhjgPkv, password: oppc). It contains 379 courses, one per line, with each line holding three fields: course name\tcourse summary\tcourse details. The HTML tags have already been removed; the examples below show only the course names:

Writing II: Rhetorical Composing
Genetics and Society: A Course for Educators
General Game Playing
Genes and the Human Condition (From Behavior to Biotechnology)
A Brief History of Humankind
New Models of Business in Society
Analyse Numérique pour Ingénieurs
Evolution: A Course for Educators
Coding the Matrix: Linear Algebra through Computer Science Applications
The Dynamic Earth: A Course for Educators
...

All right, let's fire up Python and load the data:

>>> courses = [line.strip() for line in file('coursera_corpus')]
>>> courses_name = [course.split('\t')[0] for course in courses]
>>> print courses_name[0:10]
['Writing II: Rhetorical Composing', 'Genetics and Society: A Course for Educators', 'General Game Playing', 'Genes and the Human Condition (From Behavior to Biotechnology)', 'A Brief History of Humankind', 'New Models of Business in Society', 'Analyse Num\xc3\xa9rique pour Ing\xc3\xa9nieurs', 'Evolution: A Course for Educators', 'Coding the Matrix: Linear Algebra through Computer Science Applications', 'The Dynamic Earth: A Course for Educators']

2. Bringing in NLTK
NLTK is the well-known Python natural language processing toolkit. It mainly targets English, but since the course data 课程图谱 (Course Graph) currently handles is mostly English, that is enough. NLTK comes with documentation, corpora, and a companion book, and a fellow reader has even translated that book into Chinese free of charge: 用Python进行自然语言处理 (Natural Language Processing with Python). Sometimes you just have to sigh: people doing English NLP are lucky.

First, install NLTK. The NLTK homepage explains in detail how to install it on Mac, Linux, and Windows: http://nltk.org/install.html. The main thing is to install the dependencies NumPy and PyYAML first; the rest is painless. After installation, import nltk to test it. If that works, there is one more important step: downloading the official NLTK corpora:

>>> import nltk
>>> nltk.download()

A graphical window will pop up offering two bundles to download, all-corpora and book; it is best to select both. The download takes a while, and only once the corpora are in place is NLTK truly ready to use on your machine. You can test it with the Brown corpus:

>>> from nltk.corpus import brown
>>> brown.readme()
'BROWN CORPUS\n\nA Standard Corpus of Present-Day Edited American\nEnglish, for use with Digital Computers.\n\nby W. N. Francis and H. Kucera (1964)\nDepartment of Linguistics, Brown University\nProvidence, Rhode Island, USA\n\nRevised 1971, Revised and Amplified 1979\n\nhttp://www.hit.uib.no/icame/brown/bcm.html\n\nDistributed with the permission of the copyright holder,\nredistribution permitted.\n'
>>> brown.words()[0:10]
['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an', 'investigation', 'of']
>>> brown.tagged_words()[0:10]
[('The', 'AT'), ('Fulton', 'NP-TL'), ('County', 'NN-TL'), ('Grand', 'JJ-TL'), ('Jury', 'NN-TL'), ('said', 'VBD'), ('Friday', 'NR'), ('an', 'AT'), ('investigation', 'NN'), ('of', 'IN')]
>>> len(brown.words())
1161192

Now let's process the course data from earlier. If, as before, we merely lowercase the words in each document, we get the following:

>>> texts_lower = [[word for word in document.lower().split()] for document in courses]
>>> print texts_lower[0]
['writing', 'ii:', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'you', 'in', 'a', 'series', 'of', 'interactive', 'reading,', 'research,', 'and', 'composing', 'activities', 'along', 'with', 'assignments', 'designed', 'to', 'help', 'you', 'become', 'more', 'effective', 'consumers', 'and', 'producers', 'of', 'alphabetic,', 'visual', 'and', 'multimodal', 'texts.', 'join', 'us', 'to', 'become', 'more', 'effective', 'writers...', 'and', 'better', 'citizens.', 'rhetorical', 'composing', 'is', 'a', 'course', 'where', 'writers', 'exchange', 'words,', 'ideas,', 'talents,', 'and', 'support.', 'you', 'will', 'be', 'introduced', 'to', 'a', ...

Note that many punctuation marks are still attached to the words, so we bring in NLTK's word_tokenize function and reprocess the data:

>>> from nltk.tokenize import word_tokenize
>>> texts_tokenized = [[word.lower() for word in word_tokenize(document.decode('utf-8'))] for document in courses]
>>> print texts_tokenized[0]
['writing', 'ii', ':', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'you', 'in', 'a', 'series', 'of', 'interactive', 'reading', ',', 'research', ',', 'and', 'composing', 'activities', 'along', 'with', 'assignments', 'designed', 'to', 'help', 'you', 'become', 'more', 'effective', 'consumers', 'and', 'producers', 'of', 'alphabetic', ',', 'visual', 'and', 'multimodal', 'texts.', 'join', 'us', 'to', 'become', 'more', 'effective', 'writers', '...', 'and', 'better', 'citizens.', 'rhetorical', 'composing', 'is', 'a', 'course', 'where', 'writers', 'exchange', 'words', ',', 'ideas', ',', 'talents', ',', 'and', 'support.', 'you', 'will', 'be', 'introduced', 'to', 'a', ...

After tokenizing the English course data, we need to remove stopwords; fortunately NLTK ships with an English stopword list:

>>> from nltk.corpus import stopwords
>>> english_stopwords = stopwords.words('english')
>>> print english_stopwords
['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', 'should', 'now']
>>> len(english_stopwords)
127

That is 127 stopwords in total. First we filter them out of the course corpus:
>>> texts_filtered_stopwords = [[word for word in document if not word in english_stopwords] for document in texts_tokenized]
>>> print texts_filtered_stopwords[0]
['writing', 'ii', ':', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'series', 'interactive', 'reading', ',', 'research', ',', 'composing', 'activities', 'along', 'assignments', 'designed', 'help', 'become', 'effective', 'consumers', 'producers', 'alphabetic', ',', 'visual', 'multimodal', 'texts.', 'join', 'us', 'become', 'effective', 'writers', '...', 'better', 'citizens.', 'rhetorical', 'composing', 'course', 'writers', 'exchange', 'words', ',', 'ideas', ',', 'talents', ',', 'support.', 'introduced', 'variety', 'rhetorical', 'concepts\xe2\x80\x94that', ',', 'ideas', 'techniques', 'inform', 'persuade', 'audiences\xe2\x80\x94that', 'help', 'become', 'effective', 'consumer', 'producer', 'written', ',', 'visual', ',', 'multimodal', 'texts.', 'class', 'includes', 'short', 'videos', ',', 'demonstrations', ',', 'activities.', 'envision', 'rhetorical', 'composing', 'learning', 'community', 'includes', 'enrolled', 'course', 'instructors.', 'bring', 'expertise', 'writing', ',', 'rhetoric', 'course', 'design', ',', 'designed', 'assignments', 'course', 'infrastructure', 'help', 'share', 'experiences', 'writers', ',', 'students', ',', 'professionals', 'us.', 'collaborations', 'facilitated', 'wex', ',', 'writers', 'exchange', ',', 'place', 'exchange', 'work', 'feedback']

The stopwords are gone, but the punctuation is still there. That is easy to fix: first define a list of punctuation marks:
>>> english_punctuations = [',', '.', ':', ';', '?', '(', ')', '[', ']', '&', '!', '*', '@', '#', '$', '%']

Then filter out the punctuation:
>>> texts_filtered = [[word for word in document if not word in english_punctuations] for document in texts_filtered_stopwords]
>>> print texts_filtered[0]
['writing', 'ii', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'series', 'interactive', 'reading', 'research', 'composing', 'activities', 'along', 'assignments', 'designed', 'help', 'become', 'effective', 'consumers', 'producers', 'alphabetic', 'visual', 'multimodal', 'texts.', 'join', 'us', 'become', 'effective', 'writers', '...', 'better', 'citizens.', 'rhetorical', 'composing', 'course', 'writers', 'exchange', 'words', 'ideas', 'talents', 'support.', 'introduced', 'variety', 'rhetorical', 'concepts\xe2\x80\x94that', 'ideas', 'techniques', 'inform', 'persuade', 'audiences\xe2\x80\x94that', 'help', 'become', 'effective', 'consumer', 'producer', 'written', 'visual', 'multimodal', 'texts.', 'class', 'includes', 'short', 'videos', 'demonstrations', 'activities.', 'envision', 'rhetorical', 'composing', 'learning', 'community', 'includes', 'enrolled', 'course', 'instructors.', 'bring', 'expertise', 'writing', 'rhetoric', 'course', 'design', 'designed', 'assignments', 'course', 'infrastructure', 'help', 'share', 'experiences', 'writers', 'students', 'professionals', 'us.', 'collaborations', 'facilitated', 'wex', 'writers', 'exchange', 'place', 'exchange', 'work', 'feedback']

Going one step further, we stem the English words. NLTK offers several stemming interfaces; see http://nltk.org/api/nltk.stem.html for details. The options include well-known English stemmers such as the Lancaster Stemmer and the Porter Stemmer. Here we use the LancasterStemmer:

>>> from nltk.stem.lancaster import LancasterStemmer
>>> st = LancasterStemmer()
>>> st.stem('stemmed')
'stem'
>>> st.stem('stemming')
'stem'
>>> st.stem('stemmer')
'stem'
>>> st.stem('running')
'run'
>>> st.stem('maximum')
'maxim'
>>> st.stem('presumably')
'presum'

Let's apply this interface to the course data above:
>>> texts_stemmed = [[st.stem(word) for word in docment] for docment in texts_filtered]
>>> print texts_stemmed[0]
['writ', 'ii', 'rhet', 'compos', 'rhet', 'compos', 'eng', 'sery', 'interact', 'read', 'research', 'compos', 'act', 'along', 'assign', 'design', 'help', 'becom', 'effect', 'consum', 'produc', 'alphabet', 'vis', 'multimod', 'texts.', 'join', 'us', 'becom', 'effect', 'writ', '...', 'bet', 'citizens.', 'rhet', 'compos', 'cours', 'writ', 'exchang', 'word', 'idea', 'tal', 'support.', 'introduc', 'vary', 'rhet', 'concepts\xe2\x80\x94that', 'idea', 'techn', 'inform', 'persuad', 'audiences\xe2\x80\x94that', 'help', 'becom', 'effect', 'consum', 'produc', 'writ', 'vis', 'multimod', 'texts.', 'class', 'includ', 'short', 'video', 'demonst', 'activities.', 'envid', 'rhet', 'compos', 'learn', 'commun', 'includ', 'enrol', 'cours', 'instructors.', 'bring', 'expert', 'writ', 'rhet', 'cours', 'design', 'design', 'assign', 'cours', 'infrastruct', 'help', 'shar', 'expery', 'writ', 'stud', 'profess', 'us.', 'collab', 'facilit', 'wex', 'writ', 'exchang', 'plac', 'exchang', 'work', 'feedback']

Before bringing in gensim there is one more thing to do: remove the low-frequency words that appear only once in the whole corpus. A quick test showed that keeping them hurts the results a bit:

>>> all_stems = sum(texts_stemmed, [])
>>> stems_once = set(stem for stem in set(all_stems) if all_stems.count(stem) == 1)
>>> texts = [[stem for stem in text if stem not in stems_once] for text in texts_stemmed]

3. Bringing in gensim
With the preprocessing above in place, we can bring in gensim and quickly run the course similarity experiment. The steps below move quickly; see the previous part for the detailed explanations.

>>> from gensim import corpora, models, similarities
>>> import logging
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

>>> dictionary = corpora.Dictionary(texts)
2013-06-07 21:37:07,120 : INFO : adding document #0 to Dictionary(0 unique tokens)
2013-06-07 21:37:07,263 : INFO : built Dictionary(3341 unique tokens) from 379 documents (total 46417 corpus positions)

>>> corpus = [dictionary.doc2bow(text) for text in texts]

>>> tfidf = models.TfidfModel(corpus)
2013-06-07 21:58:30,490 : INFO : collecting document frequencies
2013-06-07 21:58:30,490 : INFO : PROGRESS: processing document #0
2013-06-07 21:58:30,504 : INFO : calculating IDF weights for 379 documents and 3341 features (29166 matrix non-zeros)

>>> corpus_tfidf = tfidf[corpus]

Here we simply decree, off the top of our heads, that the LSI model will be trained with 10 topics:
>>> lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=10)

>>> index = similarities.MatrixSimilarity(lsi[corpus])
2013-06-07 22:04:55,443 : INFO : scanning corpus to determine the number of features
2013-06-07 22:04:55,510 : INFO : creating matrix for 379 documents and 10 features

The LSI-based course index is now built. As our example, take Professor Andrew Ng's Machine Learning MOOC, which sits on line 211 of our coursera_corpus file, that is:

>>> print courses_name[210]
Machine Learning

Now we can map this course into the 10-topic LSI space and compute its similarity to the other courses:
>>> ml_course = texts[210]
>>> ml_bow = dictionary.doc2bow(ml_course)
>>> ml_lsi = lsi[ml_bow]
>>> print ml_lsi
[(0, 8.3270084238788673), (1, 0.91295652151975082), (2, -0.28296075112669405), (3, 0.0011599008827843801), (4, -4.1820134980024255), (5, -0.37889856481054851), (6, 2.0446999575052125), (7, 2.3297944485200031), (8, -0.32875594265388536), (9, -0.30389668455507612)]
>>> sims = index[ml_lsi]
>>> sort_sims = sorted(enumerate(sims), key=lambda item: -item[1])

Take the top 10 courses by similarity:
>>> print sort_sims[0:10]
[(210, 1.0), (174, 0.97812241), (238, 0.96428639), (203, 0.96283489), (63, 0.9605484), (189, 0.95390636), (141, 0.94975704), (184, 0.94269753), (111, 0.93654782), (236, 0.93601125)]

The first course is the query course itself:
>>> print courses_name[210]
Machine Learning

The second is the machine learning MOOC by Pedro Domingos, another heavyweight on Coursera:
>>> print courses_name[174]
Machine Learning

The third is the Probabilistic Graphical Models MOOC taught by Professor Daphne Koller, Coursera's other co-founder and another leading figure:
>>> print courses_name[238]
Probabilistic Graphical Models

The fourth is the neural networks MOOC by Geoffrey Hinton, yet another giant; some students call it a must-take course for deep learning:
>>> print courses_name[203]
Neural Networks for Machine Learning

The results look pretty good; if this seems interesting, try it yourself.

That wraps up this series. I had planned to write about experiments on the full English Wikipedia dump, but since 课程图谱 (Course Graph) does not need that for now, we stop here; interested readers can consult the relevant gensim documentation directly, which is very thorough. Going forward I will probably focus more on applying NLTK to Chinese text processing; stay tuned.

Note: original article. When reposting, please credit 我爱自然语言处理 (52nlp): www.52nlp.cn

Permalink: http://www.52nlp.cn/如何计算两个文档的相似度三

Recommended: the Chinese Translation of "Natural Language Processing with Python", the NLTK Companion Book

The NLTK companion book, "Natural Language Processing with Python" (《用Python进行自然语言处理》), has been out for several years, yet no Chinese translation has been published domestically. Reading the English original is of course the best choice, but for most readers a Chinese edition would be very welcome. This afternoon I saw on Weibo that 陈涛sean had posted a download link for a Chinese translation of the NLTK book, so I asked about it; the translator then contacted me privately, and it turned out he had translated it without pay and has no plans to publish it. Translation is hard, thankless work, so my respects to the translator. He also says there are some errors in it and hopes NLPers will point them out, so let's fix up this precious Chinese edition of the NLTK book together. The translator also asked for a recommendation on "52nlp"; since this benefits every NLPer, I have already updated the book's link under "Resources". The download address is:

Natural Language Processing with Python - Chinese Translation (NLTK companion book)

Flipping through the translation, and leaving the translation quality aside, the typesetting alone makes it look like a formally published translation, which shows how much care the translator put into it. Below is the "Translator's Note" excerpted from the translated edition:

As a beginner in natural language processing, I would read about "training a model", this model and that model, and never quite know what a model actually was. After working through this book, going from preprocessing the data to extracting feature sets, training models, and testing and revising them, step by step and hands-on, I finally got an intuitive sense of what a model is (an intermediate result of an algorithm, stored on disk as .pkl files and loaded directly at test time, so the earlier computation can be skipped). Now I can hold my own when people talk about "models", although the word of course has many other meanings. The same goes for verb valency, all kinds of collocations, and how real-world logic constrains the sentences a grammar can generate: without sitting down at a machine and doing it yourself, these things are hard to truly grasp.

There are plenty of books on natural language processing theory, far fewer on hands-on practice, and fewer still that cover it this systematically. In that respect this book is currently the best practical NLP tutorial in the world. Beginners who read it closely after studying the theory are bound to benefit, which is one of the reasons for translating it.

This book is the translator's spare-time English translation exercise, offered to get the ball rolling. It contains many problems, especially in Chapter 10 on applying propositional logic and first-order logic inference to natural language processing, and corrections are very welcome; you can find me on Weibo (weibo.com/chentao1999). Although the Chinese translation reads faster, reading the original is the better way to understand what the author meant.

At the end of the book the original authors list the items most urgently in need of help, and for translations they recommend using examples in the target language; for now this book still uses the English examples as-is. I hope volunteers will join the effort to localize the book into Chinese and contribute to Chinese natural language processing.

You are welcome to distribute, copy, and modify this book for study and research. If you repackage it, please keep the translator's name and Weibo. For commercial use, please contact the copyright holder of the original; the translator accepts no liability arising from such use.

Translated by 陈涛 (weibo.com/chentao1999)

April 7, 2012

Finally, as you read this book, please note anything that needs correcting and leave your errata suggestions in the comments, so that we can fix the book together. Thank you!

Beautiful Data - Applications of Statistical Language Models (3): Word Segmentation, Part 8

For a string of n characters, segmenting it with a language model requires first enumerating all candidate segmentations, and that is exactly what this line of the segment function does:
  candidates = ( [first] + segment( rem ) for first, rem in splits( text ) )
Because it calls segment recursively, it enumerates every candidate segmentation. So what is the time complexity of this function? A string of n characters has 2^(n-1) possible segmentations (there are n-1 positions between characters, and each one can either be a word boundary or not), so segment runs in O(2^n) time. No wonder the earlier tests never finished once the string got long! Continue reading
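As an aside on that exponential blow-up: the standard remedy is to memoize segment so that each distinct suffix of the input is segmented only once, which makes the number of distinct subproblems linear in the length of the string instead of exponential. Here is a rough sketch, with a toy unigram scorer standing in for the book's real language model:

# Sketch: memoizing segment() so each suffix is segmented only once.
# Pword below is a toy stand-in for a real unigram language model P(word).
from functools import wraps

def memo(f):
    cache = {}
    @wraps(f)
    def wrapper(text):
        if text not in cache:
            cache[text] = f(text)
        return cache[text]
    return wrapper

def splits(text, max_len=20):
    "All (first, rest) pairs, with len(first) capped at max_len."
    return [(text[:i + 1], text[i + 1:]) for i in range(min(len(text), max_len))]

def Pword(word):
    "Toy scorer: favor a tiny known vocabulary, penalize everything else."
    vocab = {'now': 0.2, 'is': 0.2, 'the': 0.3, 'time': 0.3}
    return vocab.get(word, 1e-8 / 10 ** len(word))

def Pwords(words):
    p = 1.0
    for w in words:
        p *= Pword(w)
    return p

@memo
def segment(text):
    if not text:
        return []
    candidates = ([first] + segment(rest) for first, rest in splits(text))
    return max(candidates, key=Pwords)

print(segment('nowisthetime'))   # -> ['now', 'is', 'the', 'time']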

Google's Python Class SOS, Continued -- Download

This is a follow-up to "Google's Python Class SOS", but first, thanks are owed to two enthusiastic readers, 一盆仙人球 and wibe: 仙人球 provided the videos, and wibe is sharing them over eMule. Below is the eMule download address for the Google's Python Class videos. Please note that wibe is online on weekdays from 8 a.m. to 5 p.m., and this is currently the only seed, so after downloading please keep sharing so that others can continue to get the course: Continue reading