Tag Archive: Natural Language Processing

A Roundup of Python Courses (MOOCs) on Coursera: From Getting Started with Python to Applied Python


Python is the language of the deep learning era, and Coursera offers many Python courses: from getting started through to mastery, from basic syntax to applied Python, serving every level of need. Below is a roundup of the Python courses on Coursera, for reference; it will be updated continuously.

1. University of Michigan's "Python for Everybody Specialization"

This specialization assumes essentially no programming background and almost no mathematics, which makes it ideal for getting started with Python. It is also the most popular Python series on Coursera and comes strongly recommended. Its stated goal is to "Learn to Program and Analyze Data with Python. Develop programs to gather, clean, analyze, and visualize data." Here is the official description:

This Specialization builds on the success of the Python for Everybody course and will introduce fundamental programming concepts including data structures, networked application program interfaces, and databases, using the Python programming language. In the Capstone Project, you’ll use the technologies learned throughout the Specialization to design and create your own applications for data retrieval, processing, and visualization.

The specialization comprises four courses plus a capstone project, covering Python basics, Python data structures, retrieving web data with Python (web scraping), using databases with Python, and Python data visualization. The individual courses are described below:

1.1 Programming for Everybody (Getting Started with Python)

An entry-level Python course, whose title might be rendered as "programming for everybody, starting with Python". If you have no programming background at all, start here:

This course aims to teach everyone the basics of programming computers using Python. We cover the basics of how one constructs a program from a series of simple instructions in Python. The course has no pre-requisites and avoids all but the simplest mathematics. Anyone with moderate computer experience should be able to master the materials in this course. This course will cover Chapters 1-5 of the textbook “Python for Everybody”. Once a student completes this course, they will be ready to take more advanced programming courses. This course covers Python 3.

1.2 Python Data Structures

A foundational Python course whose goal is to introduce the core data structures of the Python programming language. About the course:

This course will introduce the core data structures of the Python programming language. We will move past the basics of procedural programming and explore how we can use the Python built-in data structures such as lists, dictionaries, and tuples to perform increasingly complex data analysis. This course will cover Chapters 6-10 of the textbook “Python for Everybody”. This course covers Python 3.

1.3 Using Python to Access Web Data

An applied Python course: you only truly learn Python by using it. The goal here is to show how to treat the Internet as a source of data by scraping and parsing web pages:

This course will show how one can treat the Internet as a source of data. We will scrape, parse, and read web data as well as access data using web APIs. We will work with HTML, XML, and JSON data formats in Python. This course will cover Chapters 11-13 of the textbook “Python for Everybody”. To succeed in this course, you should be familiar with the material covered in Chapters 1-10 of the textbook and the first two courses in this specialization. These topics include variables and expressions, conditional execution (loops, branching, and try/except), functions, Python data structures (strings, lists, dictionaries, and tuples), and manipulating files. This course covers Python 3.

1.4 Using Databases with Python

An applied Python course on using databases from Python. The goal is to learn SQL within Python, using SQLite3 as the database for storing scraped data:

This course will introduce students to the basics of the Structured Query Language (SQL) as well as basic database design for storing data as part of a multi-step data gathering, analysis, and processing effort. The course will use SQLite3 as its database. We will also build web crawlers and multi-step data gathering and visualization processes. We will use the D3.js library to do basic data visualization. This course will cover Chapters 14-15 of the book “Python for Everybody”. To succeed in this course, you should be familiar with the material covered in Chapters 1-13 of the textbook and the first three courses in this specialization. This course covers Python 3.

1.5 Capstone: Retrieving, Processing, and Visualizing Data with Python

A hands-on applied course and the capstone of the series: through a sequence of Python projects, students practice the whole workflow of retrieving, processing, and visualizing data with Python.

In the capstone, students will build a series of applications to retrieve, process and visualize data using Python. The projects will involve all the elements of the specialization. In the first part of the capstone, students will do some visualizations to become familiar with the technologies in use and then will pursue their own project to visualize some other data that they have or can find. Chapters 15 and 16 from the book “Python for Everybody” will serve as the backbone for the capstone. This course covers Python 3.

2. University of Toronto's introductory programming course "Learn to Program: The Fundamentals"

An entry-level Python course. It teaches introductory programming through the Python language, making it effectively a zero-background Python introduction. Interested readers can consult the reviews of the earlier edition on CourseGraph: http://coursegraph.com/coursera_programming1. One student's verdict: "Both instructors speak on the slow side and explain things meticulously, and the Python Visualizer tool lets you follow a program's execution step by step; arguably the best choice for learning Python programming from zero."

Behind every mouse click and touch-screen tap, there is a computer program that makes things happen. This course introduces the fundamental building blocks of programming and teaches you how to write fun and useful programs using the Python language.

3. Rice University's Python specialization: Introduction to Scripting in Python Specialization

An entry-level Python series covering Python basics, Python data representations, Python data analysis, and Python data visualization, well suited for getting started. The goal is to enable students to use Python when tackling real problems: "Launch Your Career in Python Programming. Master the core concepts of scripting in Python to enable you to solve practical problems."

This specialization is intended for beginners who would like to master essential programming skills. Through four courses, you will cover key programming concepts in Python 3 which will prepare you to use Python to perform common scripting tasks. This knowledge will provide a solid foundation towards a career in data science, software engineering, or other disciplines involving programming.

The specialization comprises four courses, described below:

3.1 Python Programming Essentials

An entry-level course teaching Python programming fundamentals, including expressions, variables, and functions, with the goal of making you comfortable in Python:

This course will introduce you to the wonderful world of Python programming! We'll learn about the essential elements of programming and how to construct basic Python programs. We will cover expressions, variables, functions, logic, and conditionals, which are foundational concepts in computer programming. We will also teach you how to use Python modules, which enable you to benefit from the vast array of functionality that is already a part of the Python language. These concepts and skills will help you to begin to think like a computer programmer and to understand how to go about writing Python programs. By the end of the course, you will be able to write short Python programs that are able to accomplish real, practical tasks. This course is the foundation for building expertise in Python programming. As the first course in a specialization, it provides the necessary building blocks for you to succeed at learning to write more complex Python programs. This course uses Python 3. While many Python programs continue to use Python 2, Python 3 is the future of the Python programming language. This first course will use a Python 3 version of the CodeSkulptor development environment, which is specifically designed to help beginning programmers learn quickly. CodeSkulptor runs within any modern web browser and does not require you to install any software, allowing you to start writing and running small programs immediately. In the later courses in this specialization, we will help you to move to more sophisticated desktop development environments.

3.2 Python Data Representations

An entry-level course that continues with Python fundamentals, covering strings, lists, and tuples, as well as working with files:

This course will continue the introduction to Python programming that started with Python Programming Essentials. We'll learn about different data representations, including strings, lists, and tuples, that form the core of all Python programs. We will also teach you how to access files, which will allow you to store and retrieve data within your programs. These concepts and skills will help you to manipulate data and write more complex Python programs. By the end of the course, you will be able to write Python programs that can manipulate data stored in files. This will extend your Python programming expertise, enabling you to write a wide range of scripts using Python. This course uses Python 3. While most Python programs continue to use Python 2, Python 3 is the future of the Python programming language. This course introduces basic desktop Python development environments, allowing you to run Python programs directly on your computer. This choice enables a smooth transition from online development environments.

3.3 Python Data Analysis

A foundational course on reading and analyzing tabular and structured data with Python, for example CSV files:

This course will continue the introduction to Python programming that started with Python Programming Essentials and Python Data Representations. We'll learn about reading, storing, and processing tabular data, which are common tasks. We will also teach you about CSV files and Python's support for reading and writing them. CSV files are a generic, plain text file format that allows you to exchange tabular data between different programs. These concepts and skills will help you to further extend your Python programming knowledge and allow you to process more complex data. By the end of the course, you will be comfortable working with tabular data in Python. This will extend your Python programming expertise, enabling you to write a wider range of scripts using Python. This course uses Python 3. While most Python programs continue to use Python 2, Python 3 is the future of the Python programming language. This course uses basic desktop Python development environments, allowing you to run Python programs directly on your computer.

3.4 Python Data Visualization

An applied course: building on the Python learned in the first three courses, you will fetch data from the web, then clean, process, and analyze it, and finally present it visually:

This is the final course in the specialization, which builds upon the knowledge learned in Python Programming Essentials, Python Data Representations, and Python Data Analysis. We will learn how to install external packages for use within Python, acquire data from sources on the Web, and then we will clean, process, analyze, and visualize that data. This course will combine the skills learned throughout the specialization to enable you to write interesting, practical, and useful programs. By the end of the course, you will be comfortable installing Python packages, analyzing existing data, and generating visualizations of that data. This course will complete your education as a scripter, enabling you to locate, install, and use Python packages written by others. You will be able to effectively utilize tools and packages that are widely available to amplify your effectiveness and write useful programs.

4. Rice University's Fundamentals of Computing Specialization

An entry-level Python programming series covering much of the material that first-year computer science students at Rice University study. Students learn modern programming techniques through Python and apply them in some 20 entertaining programming projects.

This Specialization covers much of the material that first-year Computer Science students take at Rice University. Students learn sophisticated programming skills in Python from the ground up and apply these skills in building more than 20 fun projects. The Specialization concludes with a Capstone exam that allows the students to demonstrate the range of knowledge that they have acquired in the Specialization.

The series comprises six courses, spanning interactive Python programming, principles of computing, and algorithmic thinking, plus a capstone exam. The goal: learn how to program and think like a computer scientist. The individual courses:

4.1 An Introduction to Interactive Programming in Python (Part 1)

An entry-level Python course covering programming basics such as expressions, conditionals, and functions, which are then used to build a simple interactive application.

This two-part course is designed to help students with very little or no computing background learn the basics of building simple interactive applications. Our language of choice, Python, is an easy-to-learn, high-level computer language that is used in many of the computational courses offered on Coursera. To make learning Python easy, we have developed a new browser-based programming environment that makes developing interactive applications in Python simple. These applications will involve windows whose contents are graphical and respond to buttons, the keyboard and the mouse. In part 1 of this course, we will introduce the basic elements of programming (such as expressions, conditionals, and functions) and then use these elements to create simple interactive applications such as a digital stopwatch. Part 1 of this class will culminate in building a version of the classic arcade game "Pong".

4.2 An Introduction to Interactive Programming in Python (Part 2)

An entry-level Python course continuing with Python basics, such as lists, dictionaries, and loops, which are then used to build a simple game such as Blackjack:

This two-part course is designed to help students with very little or no computing background learn the basics of building simple interactive applications. Our language of choice, Python, is an easy-to-learn, high-level computer language that is used in many of the computational courses offered on Coursera. To make learning Python easy, we have developed a new browser-based programming environment that makes developing interactive applications in Python simple. These applications will involve windows whose contents are graphical and respond to buttons, the keyboard and the mouse. In part 2 of this course, we will introduce more elements of programming (such as lists, dictionaries, and loops) and then use these elements to create games such as Blackjack. Part 2 of this class will culminate in building a version of the classic arcade game "Asteroids". Upon completing this course, you will be able to write small, but interesting Python programs. The next course in the specialization will begin to introduce a more principled approach to writing programs and solving computational problems that will allow you to write larger and more complex programs.

4.3 Principles of Computing (Part 1)

A foundational programming course focused on the craft of programming, including coding standards and testing, plus mathematical foundations including probability and combinatorics.

This two-part course builds upon the programming skills that you learned in our Introduction to Interactive Programming in Python course. We will augment those skills with both important programming practices and critical mathematical problem solving skills. These skills underlie larger scale computational problem solving and programming. The main focus of the class will be programming weekly mini-projects in Python that build upon the mathematical and programming principles that are taught in the class. To keep the class fun and engaging, many of the projects will involve working with strategy-based games. In part 1 of this course, the programming aspect of the class will focus on coding standards and testing. The mathematical portion of the class will focus on probability, combinatorics, and counting with an eye towards practical applications of these concepts in Computer Science. Recommended Background - Students should be comfortable writing small (100+ line) programs in Python using constructs such as lists, dictionaries and classes and also have a high-school math background that includes algebra and pre-calculus.

4.4 Principles of Computing (Part 2)

A foundational programming course focused on topics such as searching, sorting, and recursion:

This two-part course introduces the basic mathematical and programming principles that underlie much of Computer Science. Understanding these principles is crucial to the process of creating efficient and well-structured solutions for computational problems. To get hands-on experience working with these concepts, we will use the Python programming language. The main focus of the class will be weekly mini-projects that build upon the mathematical and programming principles that are taught in the class. To keep the class fun and engaging, many of the projects will involve working with strategy-based games. In part 2 of this course, the programming portion of the class will focus on concepts such as recursion, assertions, and invariants. The mathematical portion of the class will focus on searching, sorting, and recursive data structures. Upon completing this course, you will have a solid foundation in the principles of computation and programming. This will prepare you for the next course in the specialization, which will begin to introduce a structured approach to developing and analyzing algorithms. Developing such algorithmic thinking skills will be critical to writing large scale software and solving real world computational problems.

4.5 Algorithmic Thinking (Part 1)

A foundational course focused on cultivating algorithmic thinking; it teaches concepts from graph algorithms and their implementation in Python:

Experienced Computer Scientists analyze and solve computational problems at a level of abstraction that is beyond that of any particular programming language. This two-part course builds on the principles that you learned in our Principles of Computing course and is designed to train students in the mathematical concepts and process of "Algorithmic Thinking", allowing them to build simpler, more efficient solutions to real-world computational problems. In part 1 of this course, we will study the notion of algorithmic efficiency and consider its application to several problems from graph theory. As the central part of the course, students will implement several important graph algorithms in Python and then use these algorithms to analyze two large real-world data sets. The main focus of these tasks is to understand interaction between the algorithms and the structure of the data sets being analyzed by these algorithms. Recommended Background - Students should be comfortable writing intermediate size (300+ line) programs in Python and have a basic understanding of searching, sorting, and recursion. Students should also have a solid math background that includes algebra, pre-calculus and a familiarity with the math concepts covered in "Principles of Computing".

4.6 Algorithmic Thinking (Part 2)

A foundational course that continues to develop algorithmic thinking and introduces more advanced algorithmic topics, such as divide-and-conquer and dynamic programming:

Experienced Computer Scientists analyze and solve computational problems at a level of abstraction that is beyond that of any particular programming language. This two-part class is designed to train students in the mathematical concepts and process of "Algorithmic Thinking", allowing them to build simpler, more efficient solutions to computational problems. In part 2 of this course, we will study advanced algorithmic techniques such as divide-and-conquer and dynamic programming. As the central part of the course, students will implement several algorithms in Python that incorporate these techniques and then use these algorithms to analyze two large real-world data sets. The main focus of these tasks is to understand interaction between the algorithms and the structure of the data sets being analyzed by these algorithms. Once students have completed this class, they will have both the mathematical and programming skills to analyze, design, and program solutions to a wide range of computational problems. While this class will use Python as its vehicle of choice to practice Algorithmic Thinking, the concepts that you will learn in this class transcend any particular programming language.

4.7 The Fundamentals of Computing Capstone Exam

An applied Python course: having completed the courses above and their 20+ projects, students use Python and everything they have learned to demonstrate their mastery in the capstone exam:

While most specializations on Coursera conclude with a project-based course, students in the "Fundamentals of Computing" specialization have completed more than 20 projects during the first six courses of the specialization. Given that much of the material in these courses is reused from session to session, our goal in this capstone class is to provide a conclusion to the specialization that allows each student an opportunity to demonstrate their individual mastery of the material in the specialization. With this objective in mind, the focus in this Capstone class will be an exam whose questions are updated periodically. This approach is designed to help insure that each student is solving the exam problems on his/her own without outside help. For students that have done their own work, we do not anticipate that the exam will be particularly hard. However, those students who have relied too heavily on outside help in previous classes may have a difficult time. We believe that this approach will increase the value of the Certificate for this specialization.

5. University of Michigan's Applied Data Science with Python specialization

An applied Python series whose main goal is to introduce the fields of data science through the Python programming language, covering applied statistics, machine learning, information visualization, text analysis, and social network analysis, in combination with popular Python toolkits such as pandas, matplotlib, scikit-learn, nltk, and networkx.

The 5 courses in this University of Michigan specialization introduce learners to data science through the python programming language. This skills-based specialization is intended for learners who have a basic python or programming background, and want to apply statistical, machine learning, information visualization, text analysis, and social network analysis techniques through popular python toolkits such as pandas, matplotlib, scikit-learn, nltk, and networkx to gain insight into their data. Introduction to Data Science in Python (course 1), Applied Plotting, Charting & Data Representation in Python (course 2), and Applied Machine Learning in Python (course 3) should be taken in order and prior to any other course in the specialization. After completing those, courses 4 and 5 can be taken in any order. All 5 are required to earn a certificate.

The specialization contains five courses: Introduction to Data Science in Python, Applied Plotting, Charting & Data Representation in Python, Applied Machine Learning in Python, Applied Text Mining in Python, and Applied Social Network Analysis in Python. Details below:

5.1 Introduction to Data Science in Python

A foundational and applied course: it starts from Python basics, then uses the pandas data science library to introduce the DataFrame and other core data structures of data analysis, teaching students to manipulate and analyze tabular data and to run basic statistical analyses.

This course will introduce the learner to the basics of the python programming environment, including how to download and install python, expected fundamental python programming techniques, and how to find help with python programming questions. The course will also introduce data manipulation and cleaning techniques using the popular python pandas data science library and introduce the abstraction of the DataFrame as the central data structure for data analysis. The course will end with a statistics primer, showing how various statistical measures can be applied to pandas DataFrames. By the end of the course, students will be able to take tabular data, clean it, manipulate it, and run basic inferential statistical analyses. This course should be taken before any of the other Applied Data Science with Python courses: Applied Plotting, Charting & Data Representation in Python, Applied Machine Learning in Python, Applied Text Mining in Python, Applied Social Network Analysis in Python.

5.2 Applied Plotting, Charting & Data Representation in Python

An applied course focused on drawing charts and presenting data visually with the matplotlib library:

This course will introduce the learner to information visualization basics, with a focus on reporting and charting using the matplotlib library. The course will start with a design and information literacy perspective, touching on what makes a good and bad visualization, and what statistical measures translate into in terms of visualizations. The second week will focus on the technology used to make visualizations in python, matplotlib, and introduce users to best practices when creating basic charts and how to realize design decisions in the framework. The third week will describe the gamut of functionality available in matplotlib, and demonstrate a variety of basic statistical charts helping learners to identify when a particular method is good for a particular problem. The course will end with a discussion of other forms of structuring and visualizing data. This course should be taken after Introduction to Data Science in Python and before the remainder of the Applied Data Science with Python courses: Applied Machine Learning in Python, Applied Text Mining in Python, and Applied Social Network Analysis in Python.

5.3 Applied Machine Learning in Python

An applied course on machine learning with Python: the difference between machine learning and statistics, an introduction to the scikit-learn toolkit, supervised and unsupervised learning, and generalization issues such as cross-validation and overfitting.

This course will introduce the learner to applied machine learning, focusing more on the techniques and methods than on the statistics behind these methods. The course will start with a discussion of how machine learning is different than descriptive statistics, and introduce the scikit learn toolkit. The issue of dimensionality of data will be discussed, and the task of clustering data, as well as evaluating those clusters, will be tackled. Supervised approaches for creating predictive models will be described, and learners will be able to apply the scikit learn predictive modelling methods while understanding process issues related to data generalizability (e.g. cross validation, overfitting). The course will end with a look at more advanced techniques, such as building ensembles, and practical limitations of predictive models. By the end of this course, students will be able to identify the difference between a supervised (classification) and unsupervised (clustering) technique, identify which technique they need to apply for a particular dataset and need, engineer features to meet that need, and write python code to carry out an analysis. This course should be taken after Introduction to Data Science in Python and Applied Plotting, Charting & Data Representation in Python and before Applied Text Mining in Python and Applied Social Analysis in Python.

5.4 Applied Text Mining in Python

An applied course on the basics of text mining and text analysis, including regular expressions, text cleaning, and text preprocessing, plus NLP topics taught with NLTK, such as text classification and topic modeling.

This course will introduce the learner to text mining and text manipulation basics. The course begins with an understanding of how text is handled by python, the structure of text both to the machine and to humans, and an overview of the nltk framework for manipulating text. The second week focuses on common manipulation needs, including regular expressions (searching for text), cleaning text, and preparing text for use by machine learning processes. The third week will apply basic natural language processing methods to text, and demonstrate how text classification is accomplished. The final week will explore more advanced methods for detecting the topics in documents and grouping them by similarity (topic modelling). This course should be taken after: Introduction to Data Science in Python, Applied Plotting, Charting & Data Representation in Python, and Applied Machine Learning in Python.

5.5 Applied Social Network Analysis in Python

An applied course that introduces social network analysis through the Python toolkit NetworkX.

This course will introduce the learner to network analysis through the NetworkX library. The course begins with an understanding of what network analysis is and motivations for why we might model phenomena as networks. The second week introduces the concept of connectivity and network robustness. The third week will explore ways of measuring the importance or centrality of a node in a network. The final week will explore the evolution of networks over time and cover models of network generation and the link prediction problem. This course should be taken after: Introduction to Data Science in Python, Applied Plotting, Charting & Data Representation in Python, and Applied Machine Learning in Python.

You can keep digging for new Python courses on Coursera via CourseGraph, and you are welcome to recommend them here.

Note: this article first appeared on the CourseGraph blog (http://blog.coursegraph.com) and is cross-posted here. Original link: http://blog.coursegraph.com/coursera%E4%B8%8Apython%E8%AF%BE%E7%A8%8B%EF%BC%88%E5%85%AC%E5%BC%80%E8%AF%BE%EF%BC%89%E6%B1%87%E6%80%BB%E6%8E%A8%E8%8D%90%EF%BC%9A%E4%BB%8Epython%E5%85%A5%E9%97%A8%E5%88%B0%E5%BA%94%E7%94%A8python

A Roundup of Deep Learning Courses and Open Courseware


Here is a collection of deep learning courses and related open courseware, continuously updated, for reference.

1. Andrew Ng's Deep Learning Specialization, by Coursera and deeplearning.ai

This is a course from deeplearning.ai, Andrew Ng's first deep learning venture after leaving Baidu: the Deep Learning Specialization, with the slogan "Master Deep Learning, and Break into AI". As a co-founder of Coursera and the instructor of the blockbuster "Machine Learning" course, Andrew Ng has led millions of students into machine learning, and this specialization's slogan reveals his ambition: to keep leading millions into the holy land of deep learning.

As an Andrew Ng fan, I still recommend this specialization as the first choice for getting into deep learning, and I suggest paying for the Coursera version: you get the quizzes and the certificate, and above all the programming assignments, which are the key to understanding the material; just watching the videos will never get you there. See: "Notes on Andrew Ng's Deep Learning Course" and "A Summary of Andrew Ng's Deep Learning Course".

2. Geoffrey Hinton's Neural Networks for Machine Learning

Hinton's deep learning course ran once on Coursera in 2012 and then went quiet; since the new Coursera platform launched, it has run in many sessions. Comments from CourseGraph users:

"Deep learning必修课"

"宗派大师+开拓者直接讲课,秒杀一切二流子"

Compared with Andrew Ng's deep learning course above, this one is somewhat harder, and there are no programming assignments, only quizzes.

3. Oxford's deep learning course (2015): Deep learning at Oxford 2015

Although this course is officially titled "Machine Learning 2014-2015", it focuses mainly on deep learning, and it works as a very systematic machine learning and deep learning course:

Machine learning techniques enable us to automatically extract features from data so as to solve predictive tasks, such as speech recognition, object recognition, machine translation, question-answering, anomaly detection, medical diagnosis and prognosis, automatic algorithm configuration, personalisation, robot control, time series forecasting, and much more. Learning systems adapt so that they can solve new tasks, related to previously encountered tasks, more efficiently.

The course focuses on the exciting field of deep learning. By drawing inspiration from neuroscience and statistics, it introduces the basic background on neural networks, back propagation, Boltzmann machines, autoencoders, convolutional neural networks and recurrent neural networks. It illustrates how deep learning is impacting our understanding of intelligence and contributing to the practical design of intelligent machines.

Video playlist: https://www.youtube.com/playlist?list=PLE6Wd9FR--EfW8dtjAuPoTuPcqmOV53Fu

See also this recommendation: "Oxford's machine learning course taught by Nando de Freitas, focused on deep learning, with guest lectures from DeepMind's Alex Graves and Karol Gregor; content and delivery are both first-rate, highly recommended! Cloud: http://t.cn/RA2vSNX"

4. Udacity's Deep Learning (Chinese/English), by Google

A free deep learning course on Udacity taught by Google engineers, built around Google's own deep learning framework, TensorFlow; quite good:

Machine learning is one of the fastest-growing and most exciting fields, and deep learning represents its most cutting-edge but also riskiest part. In this course you will develop a thorough understanding of the motivations for deep learning and design intelligent systems that learn from complex and/or large databases.

We will teach you how to train and optimize basic neural networks, convolutional neural networks, and long short-term memory networks. Through projects and assignments you will get hands-on with the complete machine learning system TensorFlow. You will learn to solve a set of problems once considered very challenging, and, as you solve them with ease using deep learning, gain a better sense of the complex nature of artificial intelligence.

We developed this course jointly with Vincent Vanhoucke, principal scientist at Google and technical lead of the Google Brain team. The course is available in Chinese.

5. Udacity's Deep Learning Nanodegree Foundation

A paid Udacity Nanodegree Foundation program, said to be more hands-on:

Artificial intelligence is changing our world in disruptive ways, and the technology driving that progress is deep learning. Udacity has joined forces with Silicon Valley's technology stars to bring you this course as a systematic introduction. Through lively Silicon Valley content, exclusive hands-on projects, and professional code review, you will quickly master the fundamentals and frontier applications of deep learning.

Every line of code you write in the projects receives professional review and feedback, and in synchronized study groups you get guidance and encouragement from senior students and mentors throughout.

6. The fast.ai deep learning courses

fast.ai offers several deep learning courses, with an amusing slogan: Making neural nets uncool again. Their courses (all are free and have no ads):

Deep Learning Part 1: Practical Deep Learning for Coders
Why we created the course
What we cover in the course
Deep Learning Part 2: Cutting Edge Deep Learning for Coders
Computational Linear Algebra: Online textbook and Videos
Providing a Good Education in Deep Learning—our teaching philosophy
A Unique Path to Deep Learning Expertise—our teaching approach

7. NTU professor Hung-yi Lee's deep learning course: Machine Learning and having it Deep and Structured

A rare free deep learning course taught in Chinese:

Course page: http://speech.ee.ntu.edu.tw/~tlkagk/courses_MLDS17.html
Video playlist: https://www.youtube.com/playlist?list=PLJV_el3uVTsPMxPbjeX7PicgWbY7F8wW9
Bilibili mirror of the lectures: https://www.bilibili.com/video/av9770302/

8. NTU professor Yun-Nung (Vivian) Chen's applied deep learning course: Applied Deep Learning / Machine Learning and Having It Deep and Structured

Taught by a reportedly charming instructor, the course ran once in fall 2016, without videos; the latest edition, fall 2017, has just started, and lecture videos are gradually being posted to YouTube:

2016 course page, with slides and other materials: https://www.csie.ntu.edu.tw/~yvchen/f105-adl/index.html
2017 course page, materials being posted: https://www.csie.ntu.edu.tw/~yvchen/f106-adl/
YouTube videos (no playlist yet; watch the official channel for new uploads): https://www.youtube.com/channel/UCyB2RBqKbxDPGCs1PokeUiA/videos

9. Yann LeCun's deep learning lectures

"Yann Lecun 在 2016 年初于法兰西学院开课,这是其中关于深度学习的 8 堂课。当时是用法语授课,后来加入了英文字幕。
作为人工智能领域大牛和 Facebook AI 实验室(FAIR)的负责人,Yann Lecun 身处业内机器学习研究的最前沿。他曾经公开表示,现有的一些机器学习公开课内容已经有些过时。通过 Yann Lecun 的课程能了解到近几年深度学习研究的最新进展。该系列可作为探索深度学习的进阶课程。"

10. The 2016 Montreal Deep Learning Summer School

Why recommended: just look at the speaker lineup. Yoshua Bengio on recurrent neural networks, Surya Ganguli on theoretical neuroscience and deep learning theory, Sumit Chopra on reasoning and attention, Jeff Dean on large-scale machine learning with TensorFlow, Ruslan Salakhutdinov on learning deep generative models, Ryan Olson on GPU programming for deep learning, and more.

11. Stanford's applied deep learning course: CS231n: Convolutional Neural Networks for Visual Recognition

This computer vision oriented deep learning course is captained by Professor Fei-Fei Li. It is aimed at Stanford students, it is the genuine article, and it is very highly rated:

Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization and detection. Recent developments in neural network (aka “deep learning”) approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This course is a deep dive into details of the deep learning architectures with a focus on learning end-to-end models for these tasks, particularly image classification. During the 10-week course, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. The final assignment will involve training a multi-million parameter convolutional neural network and applying it on the largest image classification dataset (ImageNet). We will focus on teaching how to set up the problem of image recognition, the learning algorithms (e.g. backpropagation), practical engineering tricks for training and fine-tuning the networks and guide the students through hands-on assignments and a final course project. Much of the background and materials of this course will be drawn from the ImageNet Challenge.

12. Stanford's applied deep learning course: Natural Language Processing with Deep Learning

Run by NLP heavyweights Chris Manning and Richard Socher, this course is the definitive route into deep learning for natural language processing.

Natural language processing (NLP) is one of the most important technologies of the information age. Understanding complex language utterances is also a crucial part of artificial intelligence. Applications of NLP are everywhere because people communicate most everything in language: web search, advertisement, emails, customer service, language translation, radiology reports, etc. There are a large variety of underlying tasks and machine learning models behind NLP applications. Recently, deep learning approaches have obtained very high performance across many different NLP tasks. These models can often be trained with a single end-to-end model and do not require traditional, task-specific feature engineering. In this winter quarter course students will learn to implement, train, debug, visualize and invent their own neural network models. The course provides a thorough introduction to cutting-edge research in deep learning applied to NLP. On the model side we will cover word vector representations, window-based neural networks, recurrent neural networks, long-short-term-memory models, recursive neural networks, convolutional neural networks as well as some recent models involving a memory component. Through lectures and programming assignments students will learn the necessary engineering tricks for making neural networks work on practical problems.

The course merges the two instructors' earlier Stanford courses: the natural language processing course cs224n (Natural Language Processing) and the deep learning for NLP course cs224d (Deep Learning for Natural Language Processing).

13. Stanford's deep learning course: CS 20SI: Tensorflow for Deep Learning Research

Strictly speaking, this course is mainly about the deep learning framework TensorFlow itself:

Tensorflow is a powerful open-source software library for machine learning developed by researchers at Google Brain. It has many pre-built functions to ease the task of building different neural networks. Tensorflow allows distribution of computation across different computers, as well as multiple CPUs and GPUs within a single machine. TensorFlow provides a Python API, as well as a less documented C++ API. For this course, we will be using Python.

This course will cover the fundamentals and contemporary usage of the Tensorflow library for deep learning research. We aim to help students understand the graphical computational model of Tensorflow, explore the functions it has to offer, and learn how to build and structure models best suited for a deep learning project. Through the course, students will use Tensorflow to build models of different complexity, from simple linear/logistic regression to convolutional neural network and recurrent neural networks with LSTM to solve tasks such as word embeddings, translation, optical character recognition. Students will also learn best practices to structure a model and manage research experiments.

14. Oxford & DeepMind's joint deep learning for NLP course: Deep Learning for Natural Language Processing: 2016-2017

Course page: https://www.cs.ox.ac.uk/teaching/courses/2016-2017/dl/

GitHub course repository: https://github.com/oxford-cs-deepnlp-2017/

Video playlist: https://www.youtube.com/playlist?list=PL613dYIGMXoZBtZhbyiBqb0QtgK6oJbpm

Bilibili mirror: https://www.bilibili.com/video/av9817911/

15. Carnegie Mellon University's (CMU) applied deep learning course: CMU CS 11-747, Fall 2017: Neural Networks for NLP

Course page: http://phontron.com/class/nn4nlp2017/

Video playlist: https://www.youtube.com/watch?v=Sss2EA4hhBQ&list=PL8PYTP1V4I8ABXzdqtOpB_eqBlVAz_xPT

16. MIT's week-long deep learning course: 6.S191: Introduction to Deep Learning, http://introtodeeplearning.com/

17. A short deep learning course (taught in English) launched in 2014 by the Nara Institute of Science and Technology (NAIST): Deep Learning and Neural Networks

18. Deep Learning course: lecture slides and lab notebooks

Recommendations of other deep learning courses not covered here are welcome.

Note: this article first appeared on the CourseGraph blog (http://blog.coursegraph.com) and is cross-posted here. Original link: http://blog.coursegraph.com/深度学习课程资源整理. Please credit the source when reposting.

How to Learn Natural Language Processing: One Book and One Course


Many readers have asked, through various channels, how to learn natural language processing. I wrote a couple of short pieces on this long ago, "How to Learn Natural Language Processing" and "A Few Introductory NLP Books", but I would rather point to this Zhihu thread: "What is the fastest way to get started with NLP?", which contains a systematic answer from Zhou Ming of Microsoft Research Asia and a generous contribution from Liu Zhiyuan of Tsinghua University, "How should a beginner look up academic resources in NLP", along with other people's selfless sharing.

For readers hoping to get into NLP, though, I recommend reading this book first: Speech and Language Processing, whose first edition was published in Chinese as 《自然语言处理综论》. The authors are two towering figures of the field: Professor Dan Jurafsky of Stanford and Professor James H. Martin of the University of Colorado. It was my own introductory book; I have read the Chinese translation (of the first English edition) and the second English edition. The third edition is being written, with many chapters already finished and downloadable: Speech and Language Processing (3rd ed. draft). Judging from the chapter list, the third edition adds a good number of chapters on deep learning for NLP, and the content and scope have been updated considerably:

Chapter | Slides | Relation to 2nd ed.
1: Introduction | | [Ch. 1 in 2nd ed.]
2: Regular Expressions, Text Normalization, and Edit Distance | Text [pptx] [pdf], Edit Distance [pptx] [pdf] | [Ch. 2 and parts of Ch. 3 in 2nd ed.]
3: Finite State Transducers | |
4: Language Modeling with N-Grams | LM [pptx] [pdf] | [Ch. 4 in 2nd ed.]
5: Spelling Correction and the Noisy Channel | Spelling [pptx] [pdf] | [expanded from pieces in Ch. 5 in 2nd ed.]
6: Naive Bayes Classification and Sentiment | NB [pptx] [pdf], Sentiment [pptx] [pdf] | [new in this edition]
7: Logistic Regression | |
8: Neural Nets and Neural Language Models | |
9: Hidden Markov Models | | [Ch. 6 in 2nd ed.]
10: Part-of-Speech Tagging | | [Ch. 5 in 2nd ed.]
11: Formal Grammars of English | | [Ch. 12 in 2nd ed.]
12: Syntactic Parsing | | [Ch. 13 in 2nd ed.]
13: Statistical Parsing | |
14: Dependency Parsing | | [new in this edition]
15: Vector Semantics | Vector [pptx] [pdf] | [expanded from parts of Ch. 19 and 20 in 2nd ed.]
16: Semantics with Dense Vectors | Dense Vector [pptx] [pdf] | [new in this edition]
17: Computing with Word Senses: WSD and WordNet | Intro, Sim [pptx] [pdf], WSD [pptx] [pdf] | [expanded from parts of Ch. 19 and 20 in 2nd ed.]
18: Lexicons for Sentiment and Affect Extraction | SentLex [pptx] [pdf] | [new in this edition]
19: The Representation of Sentence Meaning | |
20: Computational Semantics | |
21: Information Extraction | | [Ch. 22 in 2nd ed.]
22: Semantic Role Labeling and Argument Structure | SRL [pptx] [pdf], Select [pptx] [pdf] | [expanded from parts of Ch. 19 and 20 in 2nd ed.]
23: Neural Models of Sentence Meaning (RNN, LSTM, CNN, etc.) | |
24: Coreference Resolution and Entity Linking | |
25: Discourse Coherence | |
26: Seq2seq Models and Summarization | |
27: Machine Translation | |
28: Question Answering | |
29: Conversational Agents | |
30: Speech Recognition | |
31: Speech Synthesis | |

In addition, co-author Dan Jurafsky once taught a Coursera course on this material: Natural Language Processing. The course no longer seems findable on the new Coursera platform, but we have made a backup on Baidu Pan containing the course videos and the second English edition of the book; read the book alongside the videos for the best effect:

Link: https://pan.baidu.com/s/1kUCrV8r, password: jghn.

For those who keep searching for a way into natural language processing, getting through this book and this course is a necessary first step: everything starts from a foundation.

Also, follow our WeChat public account NLPJob and reply "slp" to get the latest resources for the book and the course.

An Introduction to spaCy, a Python NLP Toolkit


spaCy is a Python natural language processing toolkit. Born in mid-2014, it bills itself as "Industrial-Strength Natural Language Processing in Python". spaCy makes heavy use of Cython to speed up its modules, which distinguishes it from the more academically flavored NLTK and gives it real value in industrial applications.

Installing and building spaCy is easy; on Ubuntu, installing with pip is enough:

sudo apt-get install build-essential python-dev git
sudo pip install -U spacy

After installation you still need to download the model data. Taking the English models as an example, the "all" argument downloads everything:

sudo python -m spacy.en.download all

Or download the models and the GloVe-trained word vectors separately:


# This downloads the English tokenizer, POS tagging, parsing, and named entity recognition models
python -m spacy.en.download parser

# This downloads the GloVe-trained word vectors
python -m spacy.en.download glove

The downloaded data lives under data in the spaCy installation directory; on my Ubuntu machine:

textminer@textminer:/usr/local/lib/python2.7/dist-packages/spacy/data$ du -sh *
776M en-1.1.0
774M en_glove_cc_300_1m_vectors-1.0.0

Inside the English model data:

textminer@textminer:/usr/local/lib/python2.7/dist-packages/spacy/data/en-1.1.0$ du -sh *
424M deps
8.0K meta.json
35M ner
12M pos
84K tokenizer
300M vocab
6.3M wordnet

You can check whether the model data installed successfully with:


textminer@textminer:~$ python -c "import spacy; spacy.load('en'); print('OK')"
OK

You can also test with pytest:


# First, find spaCy's installation path:
python -c "import os; import spacy; print(os.path.dirname(spacy.__file__))"
/usr/local/lib/python2.7/dist-packages/spacy

# Then install pytest:
sudo python -m pip install -U pytest

# Finally, run the tests:
python -m pytest /usr/local/lib/python2.7/dist-packages/spacy --vectors --model --slow
============================= test session starts ==============================
platform linux2 -- Python 2.7.12, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: /usr/local/lib/python2.7/dist-packages/spacy, inifile:
collected 318 items

../../usr/local/lib/python2.7/dist-packages/spacy/tests/test_matcher.py ........
../../usr/local/lib/python2.7/dist-packages/spacy/tests/matcher/test_entity_id.py ....
../../usr/local/lib/python2.7/dist-packages/spacy/tests/matcher/test_matcher_bugfixes.py .....
......
../../usr/local/lib/python2.7/dist-packages/spacy/tests/vocab/test_vocab.py .......Xx
../../usr/local/lib/python2.7/dist-packages/spacy/tests/website/test_api.py x...............
../../usr/local/lib/python2.7/dist-packages/spacy/tests/website/test_home.py ............

============== 310 passed, 5 xfailed, 3 xpassed in 53.95 seconds ===============

Now we can quickly try out spaCy's features, using English data as the example. spaCy currently supports mainly English and German; support for other languages is gradually being added:


textminer@textminer:~$ ipython
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
Type "copyright", "credits" or "license" for more information.

IPython 2.4.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.

In [1]: import spacy

# Load the English model data; this takes a moment
In [2]: nlp = spacy.load('en')

Word tokenization (spaCy 1.2 added a Chinese tokenization interface based on the Jieba segmenter):

In [3]: test_doc = nlp(u"it's word tokenize test for spacy")

In [4]: print(test_doc)
it's word tokenize test for spacy

In [5]: for token in test_doc:
print(token)
...:
it
's
word
tokenize
test
for
spacy

Sentence segmentation (English):


In [6]: test_doc = nlp(u'Natural language processing (NLP) deals with the application of computational models to text or speech data. Application areas within NLP include automatic (machine) translation between languages; dialogue systems, which allow a human to interact with a machine using natural language; and information extraction, where the goal is to transform unstructured text into structured (database) representations that can be searched and browsed in flexible ways. NLP technologies are having a dramatic impact on the way people interact with computers, on the way people interact with each other through the use of language, and on the way people access the vast amount of linguistic data now in electronic form. From a scientific viewpoint, NLP involves fundamental questions of how to structure formal models (for example statistical models) of natural language phenomena, and of how to design algorithms that implement these models.')

In [7]: for sent in test_doc.sents:
print(sent)
...:
Natural language processing (NLP) deals with the application of computational models to text or speech data.
Application areas within NLP include automatic (machine) translation between languages; dialogue systems, which allow a human to interact with a machine using natural language; and information extraction, where the goal is to transform unstructured text into structured (database) representations that can be searched and browsed in flexible ways.
NLP technologies are having a dramatic impact on the way people interact with computers, on the way people interact with each other through the use of language, and on the way people access the vast amount of linguistic data now in electronic form.
From a scientific viewpoint, NLP involves fundamental questions of how to structure formal models (for example statistical models) of natural language phenomena, and of how to design algorithms that implement these models.


Lemmatization:


In [8]: test_doc = nlp(u"you are best. it is lemmatize test for spacy. I love these books")

In [9]: for token in test_doc:
print(token, token.lemma_, token.lemma)
...:
(you, u'you', 472)
(are, u'be', 488)
(best, u'good', 556)
(., u'.', 419)
(it, u'it', 473)
(is, u'be', 488)
(lemmatize, u'lemmatize', 1510296)
(test, u'test', 1351)
(for, u'for', 480)
(spacy, u'spacy', 173783)
(., u'.', 419)
(I, u'i', 570)
(love, u'love', 644)
(these, u'these', 642)
(books, u'book', 1011)

Part-of-speech (POS) tagging:


In [10]: for token in test_doc:
print(token, token.pos_, token.pos)
....:
(you, u'PRON', 92)
(are, u'VERB', 97)
(best, u'ADJ', 82)
(., u'PUNCT', 94)
(it, u'PRON', 92)
(is, u'VERB', 97)
(lemmatize, u'ADJ', 82)
(test, u'NOUN', 89)
(for, u'ADP', 83)
(spacy, u'NOUN', 89)
(., u'PUNCT', 94)
(I, u'PRON', 92)
(love, u'VERB', 97)
(these, u'DET', 87)
(books, u'NOUN', 89)

Named entity recognition (NER):


In [11]: test_doc = nlp(u"Rami Eid is studying at Stony Brook University in New York")

In [12]: for ent in test_doc.ents:
print(ent, ent.label_, ent.label)
....:
(Rami Eid, u'PERSON', 346)
(Stony Brook University, u'ORG', 349)
(New York, u'GPE', 350)

Noun phrase (noun chunk) extraction:


In [13]: test_doc = nlp(u'Natural language processing (NLP) deals with the application of computational models to text or speech data. Application areas within NLP include automatic (machine) translation between languages; dialogue systems, which allow a human to interact with a machine using natural language; and information extraction, where the goal is to transform unstructured text into structured (database) representations that can be searched and browsed in flexible ways. NLP technologies are having a dramatic impact on the way people interact with computers, on the way people interact with each other through the use of language, and on the way people access the vast amount of linguistic data now in electronic form. From a scientific viewpoint, NLP involves fundamental questions of how to structure formal models (for example statistical models) of natural language phenomena, and of how to design algorithms that implement these models.')

In [14]: for np in test_doc.noun_chunks:
print(np)
....:
Natural language processing
Natural language processing (NLP) deals
the application
computational models
text
speech
data
Application areas
NLP
automatic (machine) translation
languages
dialogue systems
a human
a machine
natural language
information extraction
the goal
unstructured text
structured (database) representations
flexible ways
NLP technologies
a dramatic impact
the way
people
computers
the way
people
the use
language
the way
people
the vast amount
linguistic data
electronic form
a scientific viewpoint
NLP
fundamental questions
formal models
example
natural language phenomena
algorithms
these models

Computing the similarity of two words from their word vectors:


In [15]: test_doc = nlp(u"Apples and oranges are similar. Boots and hippos aren't.")

In [16]: apples = test_doc[0]

In [17]: print(apples)
Apples

In [18]: oranges = test_doc[2]

In [19]: print(oranges)
oranges

In [20]: boots = test_doc[6]

In [21]: print(boots)
Boots

In [22]: hippos = test_doc[8]

In [23]: print(hippos)
hippos

In [24]: apples.similarity(oranges)
Out[24]: 0.77809414836023805

In [25]: boots.similarity(hippos)
Out[25]: 0.038474555379008429

Naturally, spaCy also offers syntactic (dependency) parsing and more; a small sketch follows. Also worth noting: since version 1.0, spaCy supports hooking in deep learning tools such as TensorFlow and Keras; see the official documentation's example of wiring up a sentiment analysis model: Hooking a deep learning model into spaCy.
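As a quick illustration, continuing the same session, here is a minimal sketch of inspecting the dependency parse (attribute names follow the spaCy 1.x API; the labels printed will depend on the loaded model):

# each token exposes its dependency label (dep_) and its syntactic head
In [26]: test_doc = nlp(u"I love natural language processing")

In [27]: for token in test_doc:
    print(token.text, token.dep_, token.head.text)
   ....: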

References:
spaCy official documentation
Getting Started with spaCy

Note: this is an original article; when reposting, please credit and keep the link to "52nlp" (I Love Natural Language Processing): http://www.52nlp.cn

Permalink: An Introduction to spaCy, a Python NLP Toolkit: http://www.52nlp.cn/?p=9386

Unconventional NLP (Forgetting Algorithm Series, Part 4): An Improved TF-IDF Weighting Formula


I. Preface

  The previous post introduced segmentation with the word library; this post introduces another application of the library: computing word weights.

II. The Word Weight Formula

  1. Defining the formula
    We define the following formula for computing a word's weight:
    [figure: weight formula]
  2. Where the formula comes from
    The previous post used the following formula as the basis for segmentation:
    [figure: derivation]
    Given any sentence or document, transforming the formula for the best segmentation scheme yields:
    [figure: derivation]
    Under the weight definition above, this can be read as: the log-probability of a sentence equals the sum of the weights of its words.
    Negating both sides makes each weight a positive value.

III. The Relationship with TF-IDF

  Term frequency-inverse document frequency (TF-IDF) is used very widely in natural language processing and is a standard method for keyword extraction; its formula is:
  [figure: TF-IDF formula]
  Formally, that formula looks a lot like the weight formula we defined, and its uses are similar too; are the two related?
  The answer is yes.
  IDF is counted per document: long or short, each document counts as exactly one. That is a rather coarse granularity. Can the formula be refined by also taking document length, an obviously relevant factor, into account?
  Again, the answer is yes.
  A document is laid out from words, so documents of different lengths contain different numbers of words.
  When counting documents, we can weight each one by the number of words it contains, so that long and short documents are distinguished. Rewriting the IDF formula along this line gives:
  [figure: length-weighted IDF]
  If the word library is built from the words of all documents, then in the formula above:
  [figure: substitution step]
  Putting the derivation together, the word weight formula defined in this post is essentially a strengthened TF-IDF that weights documents by their length. It is also extremely easy to apply: just read the word's frequency and the library's total frequency from the word library (see the sketch below).
  The lookup can run in as little as O(1) time, for example when the library is stored in a hash table.
  For a fuller introduction to TF-IDF and its mainstream uses, see Ruan Yifeng's post "The Application of TF-IDF and Cosine Similarity (1): Automatic Keyword Extraction".
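Here is a minimal sketch of that lookup-based weight computation. The exact published formula is in the figures above; this sketch assumes the weight of a word is the negative log of its probability, estimated as its frequency over the library's total frequency, and the tiny word_freq dictionary is purely illustrative:

import math

# hypothetical word library: word -> frequency, as produced by the
# generation method described in part (2) of this series
word_freq = {u"清华": 1200, u"大学": 5800, u"清华大学": 950}
total_freq = float(sum(word_freq.values()))

def weight(word):
    # assumed form of the weight formula: -log P(word), with P(word)
    # estimated as the word's share of the library's total frequency
    return -math.log(word_freq[word] / total_freq)

# rarer words carry larger weights, as with TF-IDF
for w in sorted(word_freq, key=weight, reverse=True):
    print(w, weight(w))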

IV. Applying the Formula

    Word weights have a wide range of uses; almost any bag-of-words style algorithm can consider using them. Common applications:
     1. Keyword extraction and automatic tag generation
        Both are simple: segment the text, drop stopwords, sort the remaining words by weight, and take the top few.
     2. Text summarization
        A complete summarization system is complex and hard to build; what is meant here is only a simple application. From the derivation above, a sentence's weight equals the sum of the weights of its segmented words, which yields a weight ranking over sentences (see the sketch below).
     3. Similarity computation
        Similarity will be covered separately in the next post.
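A correspondingly small, self-contained sketch of item 2, under the same assumed form of the weight formula (the sentences are shown pre-segmented, and the frequencies are again illustrative):

import math

word_freq = {u"清华大学": 950, u"在": 30000, u"北京": 2100}
total_freq = float(sum(word_freq.values()))

def weight(word):
    # assumed form, as in the previous sketch: -log of the word's probability
    return -math.log(word_freq[word] / total_freq)

def sentence_weight(words):
    # from the derivation above: a sentence's weight is the sum of the
    # weights of the words in its segmentation
    return sum(weight(w) for w in words)

# rank pre-segmented sentences; the top ones form a crude extractive summary
sentences = [[u"清华大学", u"在", u"北京"], [u"在", u"北京"]]
print(sorted(sentences, key=sentence_weight, reverse=True)[0])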

V. Demo Program

  When the demo program displays the word library, entries are sorted by the weight formula introduced in this post.
  The demo program is the same as the one used for word library generation.
  Special thanks: Prof. Wang Bin pointed out that the formula in this post is, in essence, TF-ICF.

VI. Contact

  1. QQ: Laohan (老憨) 244589712
  2. Email: gzdmcaoyc@163.com

Unconventional NLP (Forgetting Algorithm Series, Part 3): Word Segmentation


I. Preface

  The previous post described how to generate a word library automatically; this post describes how to use that library for segmentation.

II. How Segmentation Works

  The principles of segmentation are covered in the relevant chapters of Wu Jun's "The Beauty of Mathematics"; here is an excerpt from the Google China Blog edition:
  [figure: segmentation principles excerpt]
  From that excerpt we get the task definition: given a sentence S, find the segmentation scheme that maximizes P(S) in the formula below:
  [figure: the P(S) to be maximized]
  The joint probability is hard to compute directly, so we usually make a Markov assumption to simplify the problem: the probability of any word w_i depends only on the word w_{i-1} immediately before it.
  Wu Jun explains this point lucidly; the whole passage is excerpted below:
  [figure: excerpt on the Markov assumption]
  Furthermore, if we assume that each word is independent of all the others, the formula reaches its simplest form:
  [figure: P(S) ≈ P(w1)P(w2)...P(wn)]
  This independence assumption is the one used by the segmentation algorithm described in this post.

III. Algorithm Analysis

  Q: Is it viable to assume that the words in a segmentation are mutually independent?
  A: Yes, provided the word library was generated by the method described in part (2) of this series. The reasoning:
  Studying the freely released source of ICTCLAS, the widely praised segmentation system developed by Drs. Zhang Huaping and Liu Qun, one finds this assumption in its algorithm: each word in a segmentation depends only on the single word before it.
  Now recall how our word library is generated: if two adjacent words are strongly related, they are joined into one coarser-grained entry and added to the library. For example, besides "清华" (Tsinghua) and "大学" (university) as separate entries, "清华大学" (Tsinghua University) is itself an entry; which of them the segmenter picks is decided by their probabilities.
  In other words, by generating the word library we have already, implicitly, done the relatedness training.
  For an analysis of the ICTCLAS source, see Lü Zhenyu's post "The Arcane Code of the ICTCLAS Segmentation System".
  Q: How is the segmentation implemented?
  A: With a library built as above, we can assume the words of a segmentation are independent, and the procedure becomes simple; the following steps complete segmentation in a single O(N) scan (a sketch follows below):
  Scan the sentence character by character. For each character, look up in the library all words within the length limit that end at this character; for each such word, multiply its probability by the best probability accumulated for the words before it; keep the maximum product at the current position, together with the segmentation that achieves it.
  Repeat until the sentence is exhausted; the result cached at the last character's position is the segmentation of the whole sentence.
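Below is a minimal sketch of that single-pass scan, written as a dynamic program. The word library, its frequencies, and the unigram scoring are illustrative assumptions; a real implementation would also give unseen single characters a small floor probability so that any sentence remains segmentable:

import math

# hypothetical word library: word -> frequency
word_freq = {u"在": 30000, u"清华": 1200, u"大学": 5800, u"清华大学": 950}
total_freq = float(sum(word_freq.values()))
MAX_WORD_LEN = 5  # length limit for the dictionary lookup

def log_prob(word):
    # independence assumption: each word is scored on its own
    return math.log(word_freq[word] / total_freq)

def segment(sentence):
    n = len(sentence)
    best = [0.0] + [float("-inf")] * n  # best[i]: best score of sentence[:i]
    prev = [0] * (n + 1)                # start index of the word ending at i
    for i in range(1, n + 1):
        for j in range(max(0, i - MAX_WORD_LEN), i):
            word = sentence[j:i]
            if word in word_freq and best[j] + log_prob(word) > best[i]:
                best[i] = best[j] + log_prob(word)
                prev[i] = j
    words, i = [], n
    while i > 0:  # walk the cached choices backwards to recover the words
        words.append(sentence[prev[i]:i])
        i = prev[i]
    return list(reversed(words))

print(u" / ".join(segment(u"在清华大学")))  # 在 / 清华大学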
  3. Properties of the algorithm
    3.1 Unsupervised learning;
    3.2 O(N) time complexity;
    3.3 Self-maintaining word library: with no human involvement, the program discovers and adds new words, adjusts frequencies, cleans out wrong entries, removes rare words, and keeps the dictionary at an appropriate size;
    3.4 Domain adaptivity: when the domain changes, entries and frequencies adapt along with it;
    3.5 Supports mixed multi-language segmentation.

IV. Demo Download

  The demo program is the same as the one for word library generation.

V. Contact

  1. QQ: Laohan (老憨) 244589712
  2. Email: gzdmcaoyc@163.com

Unconventional NLP (Forgetting Algorithm Series, Part 2): Word Library Generation from Large-Scale Corpora


I. Preface

  This post describes how to simulate forgetting with Newton's law of cooling to filter out noise, generating a word library from large-scale text without supervision.

II. Word Library Generation

    Algorithm analysis; let us first consider a few questions.
    Q: The goal is to extract words from text; could a forgetting-based method work?
    A: Yes. Words have the property of recurring with relatively stable periods, so a forgetting-based method is applicable. It means we only need a suitable way of splitting sentences into substrings; those substrings become the "candidate words". Under forgetting, a candidate that recurs periodically is retained in the library, while one that appears only occasionally or at random is gradually forgotten.
    Q: Then what is a good way to split sentences into substrings?
    A: Consider any two adjacent characters in a sentence. There are two possibilities: either they belong to one common word, or they straddle the boundary between two words. We all share the intuition that two adjacent characters inside one word are more strongly "related" than two adjacent characters belonging to different words.
    Mathematics has no shortage of models of "relatedness"; here we pick one with a simple formula and easily counted parameters: if the probability of the two characters co-occurring is greater than the probability of them ending up adjacent by random arrangement, we consider them related; otherwise, unrelated.
    If two adjacent characters are unrelated, the sentence can be broken between them. Scan the sentence character by character and break wherever the adjacent pair satisfies the formula below; this cuts the sentence into substrings and yields the candidate-word set (a sketch follows below). The splitting criterion:
    [figure: splitting criterion]
    The parameters it needs come from counting: one pass over the corpus yields each character's frequency, the co-occurrence frequency of each adjacent pair, and the total frequency of all characters.
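A minimal sketch of this split-into-candidates step. The exact criterion is the one in the figure; here it is assumed to be "keep the pair together only if its observed co-occurrence probability exceeds the product of the two characters' individual probabilities", and the two-sentence corpus is purely illustrative:

from collections import Counter

# toy corpus: a list of sentences
corpus = [u"清华大学在北京", u"北京大学也在北京"]

char_freq = Counter()  # frequency of each character
pair_freq = Counter()  # co-occurrence frequency of each adjacent pair
for s in corpus:
    char_freq.update(s)
    pair_freq.update(zip(s, s[1:]))
total = float(sum(char_freq.values()))

def related(a, b):
    # assumed criterion: observed co-occurrence probability of the pair
    # versus the probability of the pair arising from random arrangement
    return pair_freq[(a, b)] / total > (char_freq[a] / total) * (char_freq[b] / total)

def candidates(sentence):
    # break the sentence wherever two adjacent characters are unrelated
    pieces, current = [], sentence[0]
    for a, b in zip(sentence, sentence[1:]):
        if related(a, b):
            current += b
        else:
            pieces.append(current)
            current = b
    pieces.append(current)
    return pieces

# on a toy corpus nearly every observed pair passes the test; on a large
# corpus, breaks emerge at genuine word boundaries such as 学|在
print(candidates(u"清华大学在北京"))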
    Q: How do we compute how much memory remains after forgetting?
    A: With Newton's law of cooling; the meaning of each parameter in the forgetting algorithm is shown in the figure below.
    For a detailed account of Newton's cooling formula, see Ruan Yifeng's post "Ranking Algorithms Based on User Votes (4): Newton's Law of Cooling".
    [figure: Newton's cooling formula, with parameter meanings]
    Q: Is the time parameter real wall-clock time, and what is a suitable forgetting coefficient?
    A: a. On time:
      Real time can be used, in which case forgetting proceeds in step with reality.
      Alternatively, time can be measured by the number of objects processed in the corpus, so forgetting only happens while data is being processed. For instance, take the number of characters processed as the clock: people read at roughly 5 to 7 characters per second. Reading speed of course varies from person to person, but this parameter does not need to be especially precise.
      b. For the forgetting coefficient, the experimental values of the Ebbinghaus forgetting curve are a good reference, as below (image from the Internet):
    [figure: Ebbinghaus forgetting curve]
      We take the data point that about 25.4% of memory remains after 6 days, assume a reading speed of 7 characters per second, and substitute them into Newton's cooling formula to obtain the forgetting coefficient (a sketch follows below):
    [figure: forgetting coefficient]
      Note that substituting different data points from the Ebbinghaus curve gives different coefficients, which affects the maximum effective capacity of the word library.
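A minimal sketch of that calibration and of the decay step itself, assuming the usual form of Newton's cooling with the ambient value at zero, i.e. remaining = initial * exp(-alpha * dt):

import math

READ_SPEED = 7  # assumed reading speed: characters per second
SIX_DAYS = 6 * 24 * 3600 * READ_SPEED  # six days, measured in characters read

# calibrate the forgetting coefficient so that about 25.4% remains after six days
ALPHA = -math.log(0.254) / SIX_DAYS

def decayed(count, last_seen, now):
    # cool a stored frequency by the number of characters processed since
    # the entry was last reinforced
    return count * math.exp(-ALPHA * (now - last_seen))

# a word counted 100 times, then not seen again for three days of reading
print(decayed(100.0, 0, 3 * 24 * 3600 * READ_SPEED))  # about 50.4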

III. Properties of the Libraries This Algorithm Generates

    3.1 Unsupervised learning
    3.2 O(N) time complexity
    3.3 Training and execution are the same process, so streaming data can be handled seamlessly
    3.4 No distinction between out-of-vocabulary words, new words, and known words
    3.5 Domain adaptivity: when the domain changes, entries and frequencies adapt along with it
    3.6 The algorithm uses only frequency, a feature common to all languages, and needs no special handling of any character, so in principle it is language-independent.
IV. Word Library Maturity
  Since every word has a relatively stable recurrence period, it is not hard to see that once the training corpus reaches a certain scale, each word's frequency reaches an equilibrium between decay and accumulation under forgetting; that is, the speed of decay roughly matches the speed of growth. In a mature library the frequencies fluctuate relatively little, and this property can be used to measure how mature the library is.
V. Source Code (C#) and Demo Download
  Generating a word library from the bundled corpus (found under the "directly runnable demo program") looks like this:
  [figure: demo program screenshot]
VI. Contact
  1. QQ: Laohan (老憨) 244589712
  2. Email: gzdmcaoyc@163.com

Unconventional NLP (Forgetting Algorithm Series, Part 1): Algorithm Overview


I. Preface

  "Forgetting" here is not a typo. The "forgetting algorithm" this series describes is the collective name for a family of natural language processing (NLP) methods built on simulating forgetting with Newton's cooling formula; it is not the famous "genetic algorithm"!

  I have been feeling my way along this unconventional "forgetting" route through NLP for over three years, and the forgetting algorithm has grown into something of a system. Although the timing still feels premature, I have decided to pause, organize what has accumulated in my head into writing, and hand it to fellow NLP enthusiasts for comment and discussion.

II. How the Forgetting Algorithm Works

  The true mark of intelligence is the ability to discover associations and extract regularities from unknown things, and forgetting is exactly the tool that equips intelligent creatures with this ability, as well as a sharp instrument for adapting to change. How does "forgetting", that seemingly negative character, pull off the magic trick of discovering regularities?

  Let us start from Pavlov's dog: the dog hears the bell and knows it is dinner time.

  There is no inherent connection between a bell and a meal. We know the dog links the two because Pavlov deliberately made them co-occur for the dog, time after time. So repetition is a necessary condition for forming an association.

  We can also imagine that, while eating, the dog hears other sounds: birdsong, perhaps, or leaves rustling in the wind. Why do these sounds, which likewise repeat, not become associated with dinner?

  Looking closely, the difference is not hard to see: the bell and the meal do not merely co-occur repeatedly, their co-occurrence has a relatively stable period, whereas those other sounds co-occur with the meal only at random.

  So what role does forgetting play in this?

    1. Everything is forgotten impartially, under the same law;

    2. Things that appear occasionally or at random are therefore gradually forgotten over time;

    3. Things that recur with a relatively stable period are forgotten under the same law, but the periodic reinforcement keeps them dynamically retained in memory.

  In natural language processing, many objects, for example words, word-to-word associations, and templates, recur with relatively stable periods, which makes them very well suited to processing by forgetting.

III. Newton's Cooling Formula

  So, what shall we use to simulate forgetting?

  Forgetting naturally brings to mind the Ebbinghaus forgetting curve. If that curve had a functional form, it would without doubt be the best modeling choice for simulating forgetting. Unfortunately it is only a set of discrete experimental data points, but it at least tells us that forgetting decays exponentially.

  There is another fact: some people have better memories and some worse, so the forgetting curves of different people differ, yet this does not essentially change how different people come to know things. In other words, if a forgetting function exists, it is first of all exponential in form, and secondly, in practical use, its coefficients are not all that important.

  This suggests that we can try some exponential-form functions in place of the forgetting curve and let practice test them; if one satisfies engineering use, good. Such formulas are not hard to find, for example simulated annealing or half-life decay.

  Once, reading about post-ranking-by-heat algorithms on Ruan Yifeng's blog, I saw that one method used Newton's law of cooling. Forgetting and cooling follow similar processes, the function has a concise and elegant form, and its parameters depend only on time. All of this made me feel instinctively that it was the "forgetting formula" I was after.

  And in practice, Newton's cooling formula has indeed proven effective and convenient, though of course better formulas may exist.

IV. What Has Been Implemented

  If natural language processing is panning for gold in sand, then the mainstream algorithms aim to pick the gold out of the sand, while the forgetting algorithm aims to sieve the sand away. Different routes to the same destination, and the tasks handled are the common mainstream ones.

  This series of posts will explain, one by one, how the forgetting algorithm delivers the following with O(N) performance:

  1. Word library generation from large-scale corpora
    1.1 Cross-language: the algorithm is language-independent; Chinese, Japanese, Korean, minority languages, and so on are all supported
    1.2 Out-of-vocabulary word discovery (any term that recurs with a relatively stable period will be collected)
    1.3 Domain adaptivity: when the training text switches to a different domain, entries and frequencies adjust by themselves
    1.4 Dictionary maturity: you can tell how mature the dictionary trained from the current corpus is

  2. Word segmentation (based on the library technology above)
    2.1 Segmentation that grows: the more it is used, the more accurately it cuts
    2.2 Self-maintaining dictionary: while cutting, it dynamically maintains the library's entries and frequencies and registers new words
    2.3 Domain adaptivity and cross-language support (inherited from the library)

  3. Word weight computation
    3.1 Keyword extraction, automatic tagging
    3.2 Document summarization
    3.3 Similarity of long and short texts
    3.4 Topic word sets

V. Contact

  1. QQ: Laohan (老憨) 244589712

  2. Email: gzdmcaoyc@163.com

Python NLP in Practice: Using the Stanford Chinese Word Segmenter in NLTK


The Stanford NLP Group is a world-renowned NLP research group that provides a series of open-source Java text analysis tools: a Word Segmenter, a Part-Of-Speech Tagger, a Named Entity Recognizer, a Parser, and more. Happily, they have also trained Chinese models for these tools, so Chinese text is supported. While using NLTK, I noticed that the current version already ships interfaces to the Stanford tools for POS tagging, named entity recognition, and parsing, but unfortunately not for the segmenter. After googling in vain and reading the relevant code, I decided to follow the existing patterns and write a Stanford Chinese segmenter interface for NLTK, making it convenient to call the Stanford text processing tools from Python.

Some preparation is needed first. Step one, naturally, is installing NLTK; see the introduction in our gensim article "How to Compute the Similarity of Two Documents". Here I recommend checking out the latest NLTK source from GitHub and installing that version with "python setup.py install": https://github.com/nltk/nltk. This version adds the interface to the Stanford parser, which some older versions lack and which we may cover later; it is also the version to which we added the segmenter interface, so other versions may have minor issues. Next, install a Java runtime; on Ubuntu 12.04 it takes just two steps:

sudo apt-get install default-jre
sudo apt-get install default-jdk

Finally, and most importantly, download the Stanford segmenter distribution, which includes the source code, model files, and dictionaries. Note that the Stanford segmenter supports not only Chinese but also Arabic; the zip to download is: Download Stanford Word Segmenter version 2014-08-27. Unzip it after downloading.

With preparation done, the first question is where in the NLTK source to add the interface file. In the NLTK package, the Stanford POS tagger and NER interfaces live in nltk/tag/stanford.py, and the parser interface in nltk/parse/stanford.py. There is a stanford.py under nltk/tokenize/, but it only wraps PTBTokenizer, an English tokenizer, with no interface to the Stanford segmenter, so I decided to add a stanford_segmenter.py under nltk/tokenize/ as NLTK's interface to the Stanford Chinese segmenter. These NLTK interfaces make use of the Linux pipe (PIPE) mechanism and the subprocess module; the interface source is pasted in the full post for interested readers, and a usage sketch follows below:
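For reference, here is roughly how the finished interface is called (a sketch: the constructor arguments follow the stanford_segmenter.py interface as contributed to NLTK at the time, and the paths are illustrative, pointing at the unzipped 2014-08-27 segmenter distribution):

# coding: utf-8
from nltk.tokenize.stanford_segmenter import StanfordSegmenter

segmenter = StanfordSegmenter(
    path_to_jar="stanford-segmenter-3.4.1.jar",
    path_to_sihan_corpora_dict="./data",
    path_to_model="./data/pku.gz",
    path_to_dict="./data/dict-chris6.ser.gz")

# segment() pipes the text through the Java segmenter and returns
# a whitespace-delimited unicode string
print(segmenter.segment(u"这是斯坦福中文分词器测试"))
# e.g. 这 是 斯坦福 中文 分词器 测试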

How to Compute the Similarity of Two Documents (3)


In the previous installment we walked through gensim's usage with a simple example; in this one we will use real data from CourseGraph for validation and improvement, and we will use NLTK to preprocess the English course data.

3. Experiments with CourseGraph Data

1. Data preparation
To make it easy for everyone to verify along, I have prepared a copy of Coursera course data, downloadable here: coursera_corpus (Baidu Pan link: http://t.cn/RhjgPkv, password: oppc). It contains 379 courses in total, one per line, each line holding three tab-separated fields: course name\tcourse blurb\tcourse details. The HTML tags have already been stripped. The sample below shows only the course names:

Writing II: Rhetorical Composing
Genetics and Society: A Course for Educators
General Game Playing
Genes and the Human Condition (From Behavior to Biotechnology)
A Brief History of Humankind
New Models of Business in Society
Analyse Numérique pour Ingénieurs
Evolution: A Course for Educators
Coding the Matrix: Linear Algebra through Computer Science Applications
The Dynamic Earth: A Course for Educators
...

Good. First, open Python and load the data:

>>> courses = [line.strip() for line in open('coursera_corpus')]
>>> courses_name = [course.split('\t')[0] for course in courses]
>>> print courses_name[0:10]
['Writing II: Rhetorical Composing', 'Genetics and Society: A Course for Educators', 'General Game Playing', 'Genes and the Human Condition (From Behavior to Biotechnology)', 'A Brief History of Humankind', 'New Models of Business in Society', 'Analyse Num\xc3\xa9rique pour Ing\xc3\xa9nieurs', 'Evolution: A Course for Educators', 'Coding the Matrix: Linear Algebra through Computer Science Applications', 'The Dynamic Earth: A Course for Educators']

2. Bringing in NLTK
NLTK is the well-known Python NLP toolkit. It targets English mainly, but since the course data 课程图谱 currently handles is chiefly English, it is more than enough. NLTK comes with documentation, corpora, and a book, and generous volunteers have even translated the book into Chinese (Natural Language Processing with Python); sometimes one can only sigh that people doing English NLP are lucky.

First, again, install NLTK. The NLTK homepage explains in detail how to install it on Mac, Linux, and Windows: http://nltk.org/install.html. The main point is to install the dependencies NumPy and PyYAML first; the rest is straightforward. Once NLTK is installed, test it with "import nltk". If that works, there is one more very important task: download the companion corpora that NLTK provides:

>>> import nltk
>>> nltk.download()

A graphical window will pop up showing two datasets for download, all-corpora and book; it is best to select and download both. This takes a while, and only once the corpora have finished downloading is NLTK truly ready to use on your machine. You can test it with the Brown corpus:

>>> from nltk.corpus import brown
>>> brown.readme()
'BROWN CORPUS\n\nA Standard Corpus of Present-Day Edited American\nEnglish, for use with Digital Computers.\n\nby W. N. Francis and H. Kucera (1964)\nDepartment of Linguistics, Brown University\nProvidence, Rhode Island, USA\n\nRevised 1971, Revised and Amplified 1979\n\nhttp://www.hit.uib.no/icame/brown/bcm.html\n\nDistributed with the permission of the copyright holder,\nredistribution permitted.\n'
>>> brown.words()[0:10]
['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an', 'investigation', 'of']
>>> brown.tagged_words()[0:10]
[('The', 'AT'), ('Fulton', 'NP-TL'), ('County', 'NN-TL'), ('Grand', 'JJ-TL'), ('Jury', 'NN-TL'), ('said', 'VBD'), ('Friday', 'NR'), ('an', 'AT'), ('investigation', 'NN'), ('of', 'IN')]
>>> len(brown.words())
1161192

Now let us process the course data from earlier. If, as before, we merely lowercase the words in each document, we get results like this:

>>> texts_lower = [[word for word in document.lower().split()] for document in courses]
>>> print texts_lower[0]
['writing', 'ii:', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'you', 'in', 'a', 'series', 'of', 'interactive', 'reading,', 'research,', 'and', 'composing', 'activities', 'along', 'with', 'assignments', 'designed', 'to', 'help', 'you', 'become', 'more', 'effective', 'consumers', 'and', 'producers', 'of', 'alphabetic,', 'visual', 'and', 'multimodal', 'texts.', 'join', 'us', 'to', 'become', 'more', 'effective', 'writers...', 'and', 'better', 'citizens.', 'rhetorical', 'composing', 'is', 'a', 'course', 'where', 'writers', 'exchange', 'words,', 'ideas,', 'talents,', 'and', 'support.', 'you', 'will', 'be', 'introduced', 'to', 'a', ...

Notice that many punctuation marks are still stuck to the words, so we bring in NLTK's word_tokenize function and re-process the data:

>>> from nltk.tokenize import word_tokenize
>>> texts_tokenized = [[word.lower() for word in word_tokenize(document.decode('utf-8'))] for document in courses]
>>> print texts_tokenized[0]
['writing', 'ii', ':', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'you', 'in', 'a', 'series', 'of', 'interactive', 'reading', ',', 'research', ',', 'and', 'composing', 'activities', 'along', 'with', 'assignments', 'designed', 'to', 'help', 'you', 'become', 'more', 'effective', 'consumers', 'and', 'producers', 'of', 'alphabetic', ',', 'visual', 'and', 'multimodal', 'texts.', 'join', 'us', 'to', 'become', 'more', 'effective', 'writers', '...', 'and', 'better', 'citizens.', 'rhetorical', 'composing', 'is', 'a', 'course', 'where', 'writers', 'exchange', 'words', ',', 'ideas', ',', 'talents', ',', 'and', 'support.', 'you', 'will', 'be', 'introduced', 'to', 'a', ...

After tokenizing the English course data, we need to remove stop words; fortunately NLTK provides an English stop word list:

>>> from nltk.corpus import stopwords
>>> english_stopwords = stopwords.words('english')
>>> print english_stopwords
['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', 'her', 'hers', 'herself', 'it', 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', 'should', 'now']
>>> len(english_stopwords)
127

That makes 127 stop words in total. We first filter them out of the course corpus:
>>> texts_filtered_stopwords = [[word for word in document if not word in english_stopwords] for document in texts_tokenized]
>>> print texts_filtered_stopwords[0]
['writing', 'ii', ':', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'series', 'interactive', 'reading', ',', 'research', ',', 'composing', 'activities', 'along', 'assignments', 'designed', 'help', 'become', 'effective', 'consumers', 'producers', 'alphabetic', ',', 'visual', 'multimodal', 'texts.', 'join', 'us', 'become', 'effective', 'writers', '...', 'better', 'citizens.', 'rhetorical', 'composing', 'course', 'writers', 'exchange', 'words', ',', 'ideas', ',', 'talents', ',', 'support.', 'introduced', 'variety', 'rhetorical', 'concepts\xe2\x80\x94that', ',', 'ideas', 'techniques', 'inform', 'persuade', 'audiences\xe2\x80\x94that', 'help', 'become', 'effective', 'consumer', 'producer', 'written', ',', 'visual', ',', 'multimodal', 'texts.', 'class', 'includes', 'short', 'videos', ',', 'demonstrations', ',', 'activities.', 'envision', 'rhetorical', 'composing', 'learning', 'community', 'includes', 'enrolled', 'course', 'instructors.', 'bring', 'expertise', 'writing', ',', 'rhetoric', 'course', 'design', ',', 'designed', 'assignments', 'course', 'infrastructure', 'help', 'share', 'experiences', 'writers', ',', 'students', ',', 'professionals', 'us.', 'collaborations', 'facilitated', 'wex', ',', 'writers', 'exchange', ',', 'place', 'exchange', 'work', 'feedback']

The stop words are filtered out, but the punctuation marks are still there. That is easy to fix; first we define a list of punctuation marks:
>>> english_punctuations = [',', '.', ':', ';', '?', '(', ')', '[', ']', '&', '!', '*', '@', '#', '$', '%']

Then we filter out those punctuation marks:
>>> texts_filtered = [[word for word in document if not word in english_punctuations] for document in texts_filtered_stopwords]
>>> print texts_filtered[0]
['writing', 'ii', 'rhetorical', 'composing', 'rhetorical', 'composing', 'engages', 'series', 'interactive', 'reading', 'research', 'composing', 'activities', 'along', 'assignments', 'designed', 'help', 'become', 'effective', 'consumers', 'producers', 'alphabetic', 'visual', 'multimodal', 'texts.', 'join', 'us', 'become', 'effective', 'writers', '...', 'better', 'citizens.', 'rhetorical', 'composing', 'course', 'writers', 'exchange', 'words', 'ideas', 'talents', 'support.', 'introduced', 'variety', 'rhetorical', 'concepts\xe2\x80\x94that', 'ideas', 'techniques', 'inform', 'persuade', 'audiences\xe2\x80\x94that', 'help', 'become', 'effective', 'consumer', 'producer', 'written', 'visual', 'multimodal', 'texts.', 'class', 'includes', 'short', 'videos', 'demonstrations', 'activities.', 'envision', 'rhetorical', 'composing', 'learning', 'community', 'includes', 'enrolled', 'course', 'instructors.', 'bring', 'expertise', 'writing', 'rhetoric', 'course', 'design', 'designed', 'assignments', 'course', 'infrastructure', 'help', 'share', 'experiences', 'writers', 'students', 'professionals', 'us.', 'collaborations', 'facilitated', 'wex', 'writers', 'exchange', 'place', 'exchange', 'work', 'feedback']

Going a step further, we stem the English words. NLTK offers several stemming tools to choose from; see this page for details: http://nltk.org/api/nltk.stem.html. The options include well-known English stemmers such as the Lancaster Stemmer and the Porter Stemmer. Here we use the LancasterStemmer:

>>> from nltk.stem.lancaster import LancasterStemmer
>>> st = LancasterStemmer()
>>> st.stem('stemmed')
'stem'
>>> st.stem('stemming')
'stem'
>>> st.stem('stemmer')
'stem'
>>> st.stem('running')
'run'
>>> st.stem('maximum')
'maxim'
>>> st.stem('presumably')
'presum'

Let us apply this stemmer to the course data above:
>>> texts_stemmed = [[st.stem(word) for word in document] for document in texts_filtered]
>>> print texts_stemmed[0]
['writ', 'ii', 'rhet', 'compos', 'rhet', 'compos', 'eng', 'sery', 'interact', 'read', 'research', 'compos', 'act', 'along', 'assign', 'design', 'help', 'becom', 'effect', 'consum', 'produc', 'alphabet', 'vis', 'multimod', 'texts.', 'join', 'us', 'becom', 'effect', 'writ', '...', 'bet', 'citizens.', 'rhet', 'compos', 'cours', 'writ', 'exchang', 'word', 'idea', 'tal', 'support.', 'introduc', 'vary', 'rhet', 'concepts\xe2\x80\x94that', 'idea', 'techn', 'inform', 'persuad', 'audiences\xe2\x80\x94that', 'help', 'becom', 'effect', 'consum', 'produc', 'writ', 'vis', 'multimod', 'texts.', 'class', 'includ', 'short', 'video', 'demonst', 'activities.', 'envid', 'rhet', 'compos', 'learn', 'commun', 'includ', 'enrol', 'cours', 'instructors.', 'bring', 'expert', 'writ', 'rhet', 'cours', 'design', 'design', 'assign', 'cours', 'infrastruct', 'help', 'shar', 'expery', 'writ', 'stud', 'profess', 'us.', 'collab', 'facilit', 'wex', 'writ', 'exchang', 'plac', 'exchang', 'work', 'feedback']

Before bringing in gensim, there is one more thing to do: remove the low-frequency words that occur only once in the whole corpus. A quick test showed that keeping them hurts the results somewhat:

>>> all_stems = sum(texts_stemmed, [])
>>> stems_once = set(stem for stem in set(all_stems) if all_stems.count(stem) == 1)
>>> texts = [[stem for stem in text if stem not in stems_once] for text in texts_stemmed]
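One aside on this snippet: all_stems.count(stem) rescans the entire list for every unique stem, which is O(N^2) overall and slow on larger corpora. A frequency table gives the same filter in a single pass (equivalent result, just faster):

from collections import Counter

stem_counts = Counter(all_stems)  # one pass over the whole corpus
texts = [[stem for stem in text if stem_counts[stem] > 1]
         for text in texts_stemmed]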

3. Bringing in gensim
With the preprocessing above in place, we can bring in gensim and quickly run the course-similarity experiment. Below we move through the pipeline quickly; see the previous part for the detailed explanations.

>>> from gensim import corpora, models, similarities
>>> import logging
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

>>> dictionary = corpora.Dictionary(texts)
2013-06-07 21:37:07,120 : INFO : adding document #0 to Dictionary(0 unique tokens)
2013-06-07 21:37:07,263 : INFO : built Dictionary(3341 unique tokens) from 379 documents (total 46417 corpus positions)

>>> corpus = [dictionary.doc2bow(text) for text in texts]

>>> tfidf = models.TfidfModel(corpus)
2013-06-07 21:58:30,490 : INFO : collecting document frequencies
2013-06-07 21:58:30,490 : INFO : PROGRESS: processing document #0
2013-06-07 21:58:30,504 : INFO : calculating IDF weights for 379 documents and 3341 features (29166 matrix non-zeros)

>>> corpus_tfidf = tfidf[corpus]

Here we pick, somewhat arbitrarily, 10 topics and train an LSI model:
>>> lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=10)

>>> index = similarities.MatrixSimilarity(lsi[corpus])
2013-06-07 22:04:55,443 : INFO : scanning corpus to determine the number of features
2013-06-07 22:04:55,510 : INFO : creating matrix for 379 documents and 10 features

The LSI-based course index is now built. As an example, take Professor Andrew Ng's machine learning MOOC, which sits on line 211 of our coursera_corpus file, that is:

>>> print courses_name[210]
Machine Learning

Now we can use the LSI model to map this course into the 10-topic space and compute its similarity to the other courses:
>>> ml_course = texts[210]
>>> ml_bow = dictionary.doc2bow(ml_course)
>>> ml_lsi = lsi[ml_bow]
>>> print ml_lsi
[(0, 8.3270084238788673), (1, 0.91295652151975082), (2, -0.28296075112669405), (3, 0.0011599008827843801), (4, -4.1820134980024255), (5, -0.37889856481054851), (6, 2.0446999575052125), (7, 2.3297944485200031), (8, -0.32875594265388536), (9, -0.30389668455507612)]
>>> sims = index[ml_lsi]
>>> sort_sims = sorted(enumerate(sims), key=lambda item: -item[1])

Take the top 10 courses by similarity:
>>> print sort_sims[0:10]
[(210, 1.0), (174, 0.97812241), (238, 0.96428639), (203, 0.96283489), (63, 0.9605484), (189, 0.95390636), (141, 0.94975704), (184, 0.94269753), (111, 0.93654782), (236, 0.93601125)]

The first course is the query itself:
>>> print courses_name[210]
Machine Learning

The second is another machine learning MOOC on Coursera, taught by the eminent Pedro Domingos:
>>> print courses_name[174]
Machine Learning

The third is the probabilistic graphical models MOOC taught by Professor Daphne Koller, another Coursera co-founder and an equally big name:
>>> print courses_name[238]
Probabilistic Graphical Models

The fourth is the neural networks MOOC by the legendary Geoffrey Hinton, which some students rate as required coursework for Deep Learning:
>>> print courses_name[203]
Neural Networks for Machine Learning

The results look pretty good. If this seems fun, do try it yourself.

And that is it for this series. I had originally planned to write up experiments on the full English Wikipedia dump, but since 课程图谱 does not need that for now, I will stop here; interested readers can go straight to the very detailed documentation on the gensim site. Going forward I will probably pay more attention to applying NLTK to Chinese text processing; stay tuned.

Note: this is an original article. When reposting, please credit 我爱自然语言处理: www.52nlp.cn

Permalink: http://www.52nlp.cn/如何计算两个文档的相似度三