# Recommended Game Theory Courses (MOOCs) on Coursera

1. Stanford University's Game Theory

This course is aimed at students, researchers, and practitioners who wish to understand more about strategic interactions. You must be comfortable with mathematical thinking and rigorous arguments. Relatively little specific math is required, but you should be familiar with basic probability theory (for example, you should know what a conditional probability is), and some very light calculus would be helpful.

2. Stanford University's Game Theory II: Advanced Applications

Popularized by movies such as "A Beautiful Mind", game theory is the mathematical modeling of strategic interaction among rational (and irrational) agents. Over four weeks of lectures, this advanced course considers how to design interactions between agents in order to achieve good social outcomes. Three main topics are covered: social choice theory (i.e., collective decision making and voting systems), mechanism design, and auctions. In the first week we consider the problem of aggregating different agents' preferences, discussing voting rules and the challenges faced in collective decision making. We present some of the most important theoretical results in the area: notably, Arrow's Theorem, which proves that there is no "perfect" voting system, and also the Gibbard-Satterthwaite and Muller-Satterthwaite Theorems. We move on to consider the problem of making collective decisions when agents are self interested and can strategically misreport their preferences. We explain "mechanism design" -- a broad framework for designing interactions between self-interested agents -- and give some key theoretical results. Our third week focuses on the problem of designing mechanisms to maximize aggregate happiness across agents, and presents the powerful family of Vickrey-Clarke-Groves mechanisms. The course wraps up with a fourth week that considers the problem of allocating scarce resources among self-interested agents, and that provides an introduction to auction theory.

3. The University of Tokyo's Welcome to Game Theory

This course provides a brief introduction to game theory. Our main goal is to understand the basic ideas behind the key concepts in game theory, such as equilibrium, rationality, and cooperation. The course uses very little mathematics, and it is ideal for those who are looking for a conceptual introduction to game theory. Business competition, political campaigns, the struggle for existence by animals and plants, and so on, can all be regarded as a kind of “game,” in which individuals try to do their best against others. Game theory provides a general framework to describe and analyze how individuals behave in such “strategic” situations. This course focuses on the key concepts in game theory, and attempts to outline the informal basic ideas that are often hidden behind mathematical definitions. Game theory has been applied to a number of disciplines, including economics, political science, psychology, sociology, biology, and computer science. Therefore, a warm welcome is extended to audiences from all fields who are interested in what game theory is all about.

# Stanford Deep Learning and Natural Language Processing, Lecture 4: Word Window Classification and Neural Networks

1. [UFLDL tutorial]
2. [Learning Representations by Back-Propagating Errors]
3. Lecture 4 slides [slides]
4. Lecture 4 video [video]

# Stanford Deep Learning and Natural Language Processing, Lecture 3: Advanced Word Vector Representations

1. Paper 1: [GloVe: Global Vectors for Word Representation]
2. Paper 2: [Improving Word Representations via Global Context and Multiple Word Prototypes]
3. Notes: [Lecture Notes 2]
4. Lecture 3 slides [slides]
5. Lecture 3 video [video]

# Stanford Deep Learning and Natural Language Processing, Lecture 2: Word Vectors

1. Paper 1: [Distributed Representations of Words and Phrases and their Compositionality]
2. Paper 2: [Efficient Estimation of Word Representations in Vector Space]
3. Lecture 2 slides [slides]
4. Lecture 2 video [video]

# Stanford Machine Learning, Lesson 8, "Neural Networks: Representation"

1) Non-linear hypotheses

2) Neurons and the brain

3) Model representation I

4) Model representation II

5) Examples and intuitions I

6) Examples and intuitions II

7) Multi-class classification

1) Non-linear hypotheses

2) Neurons and the brain

• Origin: algorithms that try to mimic the brain;
• Very popular in the 1980s and early 1990s, with popularity fading in the late 1990s;
• Recently resurgent thanks to advances in computer hardware: for many applications, neural networks are once again a "fashionable" technique;

3) Model representation I

$a_i^{(j)}$ = "activation" of unit $i$ in layer $j$

$\Theta^{(j)}$ = matrix of weights controlling the function mapping from layer $j$ to layer $j+1$
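A minimal sketch of the forward propagation these definitions describe; the sigmoid activation matches the lecture, while the 2-3-1 layer sizes are an assumption for illustration, and the all-zero weights are chosen only to make the output deterministic:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, thetas):
    """Compute a^(j+1) = g(Theta^(j) [1; a^(j)]) layer by layer."""
    a = x
    for theta in thetas:
        a = np.concatenate(([1.0], a))  # prepend the bias unit a_0 = 1
        a = sigmoid(theta @ a)
    return a

# Theta^(1): maps layer 1 (2 units + bias) to layer 2 (3 units)
theta1 = np.zeros((3, 3))
# Theta^(2): maps layer 2 (3 units + bias) to the single output unit
theta2 = np.zeros((1, 4))

h = forward(np.array([0.0, 1.0]), [theta1, theta2])  # all-zero weights -> h = [0.5]
```

Note how each $\Theta^{(j)}$ has one column per unit of layer $j$ plus one for the bias, which is why the shapes are $3 \times 3$ and $1 \times 4$ here.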

4) Model representation II

5) Examples and intuitions I

6) Examples and intuitions II
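The intuition sections of the lecture build logic gates from single sigmoid units; for example, weights $-30, 20, 20$ give a unit that computes logical AND of two binary inputs. A quick check in Python (numpy is used only for the exponential):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_and(x1, x2):
    """Single sigmoid unit from the lecture's AND example:
    h(x) = g(-30 + 20*x1 + 20*x2)."""
    return sigmoid(-30 + 20 * x1 + 20 * x2)

# The output is ~0 unless both inputs are 1, so rounding recovers AND.
truth_table = [round(neuron_and(x1, x2))
               for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# truth_table == [0, 0, 0, 1]
```

Stacking such units (AND, OR, NOT) in layers is what lets a network represent non-linear functions such as XOR, which no single unit can compute.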

7) Multi-class classification

http://en.wikipedia.org/wiki/Neural_network

http://en.wikipedia.org/wiki/Artificial_neural_network

## Getting Started with Neural Network Programming

### A Serialized Introduction to Neural Networks

http://library.thinkquest.org/29483/neural_index.shtml

http://home.agh.edu.pl/~vlsi/AI/xor_t/en/main.htm

http://en.wikipedia.org/wiki/NOR_logic

http://en.wikipedia.org/wiki/Logic_gate

# Coursera MOOC Notes: Stanford Machine Learning, Lesson 7, "Regularization"

1) The Problem of Overfitting

2) Cost Function

3) Regularized Linear Regression

4) Regularized Logistic Regression

1) The Problem of Overfitting

a) Underfitting (also called high bias)

b) A good fit

c) Overfitting (also called high variance)

(The same three cases, underfitting, a good fit, and overfitting, also arise in the classification setting.)

Options for addressing overfitting:

a) Reduce the number of features:

- manually select which features to keep;
- use a model selection algorithm (introduced later in the course);

b) Regularization:

- keep all the features, but reduce the magnitude/values of the parameters $\theta_j$;
- this works well when we have many features, each of which contributes a little to predicting $y$;

2) Cost Function

a) A good fit

b) Overfitting

Small values for the parameters lead to:

- a "simpler" hypothesis;
- a hypothesis that is less prone to overfitting;

For a problem with many features:

- features: $x_1, x_2, \ldots, x_{100}$
- parameters: $\theta_0, \theta_1, \ldots, \theta_n$
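Putting this together, the regularized cost function for linear regression from the lecture adds a penalty on the parameters (by convention $\theta_0$ is not penalized):

$$J(\theta) = \frac{1}{2m}\left[\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 + \lambda\sum_{j=1}^{n}\theta_j^2\right]$$

The first sum is the usual squared-error term; the second shrinks $\theta_1, \ldots, \theta_n$ toward zero, with $\lambda$ trading off the two goals.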

What happens if $\lambda$ is set to an extremely large value? Consider the following statements:

- the algorithm still works fine, and setting $\lambda$ very large cannot hurt it;
- the algorithm fails to eliminate overfitting;
- the algorithm results in underfitting: it fails to fit even the training data well;
- gradient descent may fail to converge;

3) Regularized Linear Regression

$X$ is an $m \times (n+1)$ design matrix, and $y$ is an $m$-dimensional vector.
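With $X$ and $y$ as above, the regularized normal equation from the lecture gives $\theta$ in closed form; the $(n+1)\times(n+1)$ matrix is the identity with its top-left entry zeroed so that $\theta_0$ is not penalized:

$$\theta = \left(X^T X + \lambda \begin{bmatrix} 0 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{bmatrix}\right)^{-1} X^T y$$

For $\lambda > 0$ this matrix is always invertible, which also resolves the non-invertibility issue that can affect the unregularized normal equation.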

4) Regularized Logistic Regression

For logistic regression with hypothesis

$h_\theta(x) = \frac{1}{1+e^{-\theta^T x}},$

the regularized cost function adds a penalty term to the cross-entropy cost:

$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + (1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$

As with linear regression, the penalty runs over $j = 1, \ldots, n$, leaving $\theta_0$ unpenalized.
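A minimal Python sketch of the regularized logistic regression cost from this section (the function names and the toy design matrix are my own, chosen for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def regularized_cost(theta, X, y, lam):
    """J(theta) = -(1/m) sum[y log h + (1-y) log(1-h)]
                  + (lam/2m) sum_{j>=1} theta_j^2.
    The bias parameter theta_0 is not penalized."""
    m = len(y)
    h = sigmoid(X @ theta)
    cross_entropy = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    penalty = lam * np.sum(theta[1:] ** 2) / (2 * m)
    return cross_entropy + penalty

# toy data: two samples, design matrix with an intercept column
X = np.array([[1.0, 0.0],
              [1.0, 1.0]])
y = np.array([0.0, 1.0])

J = regularized_cost(np.zeros(2), X, y, lam=1.0)  # at theta = 0, h = 0.5 and J = log(2)
```

At $\theta = 0$ the penalty vanishes and every prediction is $0.5$, so the cost reduces to $\log 2 \approx 0.693$, a handy sanity check when implementing the assignment.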

http://en.wikipedia.org/wiki/Regularization_%28mathematics%29

http://en.wikipedia.org/wiki/Overfitting

# Coursera MOOC Notes: Stanford Machine Learning, Lesson 4, "Linear Regression with Multiple Variables"

1) Multiple features

2) Gradient descent for multiple variables

3) Gradient descent in practice I: Feature Scaling

4) Gradient descent in practice II: Learning rate

5) Features and polynomial regression

6) Normal equation (a direct solution, as opposed to iterative methods)

7) Normal equation and non-invertibility (optional)
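For item 3, feature scaling, the lecture's mean normalization is $x_j := (x_j - \mu_j)/s_j$, where $\mu_j$ is the mean and $s_j$ the standard deviation (or range) of feature $j$. A small sketch with made-up housing features:

```python
import numpy as np

def feature_scale(X):
    """Mean normalization: subtract each column's mean, divide by its std."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

# hypothetical features: [house size in square feet, number of bedrooms]
X = np.array([[2104.0, 3.0],
              [1600.0, 3.0],
              [2400.0, 4.0],
              [1416.0, 2.0]])

X_scaled, mu, sigma = feature_scale(X)
# each column of X_scaled now has mean 0 and standard deviation 1
```

Scaling matters because features on very different scales (thousands of square feet versus a handful of bedrooms) make gradient descent converge slowly.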

# Coursera MOOC Notes: Stanford Machine Learning, Lesson 2, "Linear Regression with One Variable"

1) Model representation

2) Cost function

3) Cost function intuition I

4) Cost function intuition II

7) Gradient descent for linear regression

# Coursera MOOC Notes: Stanford Machine Learning, Lesson 1, "Introduction"

On April 23, Coursera launched six new courses, including Stanford's "Machine Learning" course, taught by Andrew Ng, a leading researcher in the field:

https://www.coursera.org/course/ml

The learning workflow for the Machine Learning course on Coursera goes like this: watch Professor Ng's lecture videos and/or read the accompanying slides; answer the review questions generated by the system, usually five questions (single-choice, multiple-choice, or even fill-in-the-blank) worth five points in total; and complete the programming assignments, which must be written in Octave (an open-source language similar to Matlab) and submitted to the system for grading. Submissions within the deadline keep the highest score achieved; scores for late submissions are discounted.

• Introduction
• Linear Regression with One Variable
• (Optional) Linear Algebra Review (students already comfortable with linear algebra may skip this)

April 30 is the deadline for the Review Questions.