Tag Archives: Deep Learning

Building a Deep Learning Server from Scratch: Installing Deep Learning Tools (TensorFlow + PyTorch + Torch)

Deep Learning Specialization on Coursera

This series spans several posts; here is the index of the related articles, for reference:

Below are installation notes for the deep learning toolkits covered here, including TensorFlow, PyTorch, and Torch:

1. TensorFlow:

First install libcupti-dev:

sudo apt-get install libcupti-dev

Then install TensorFlow (1.4 at the time of writing) the virtualenv way:

sudo apt-get install python-pip python-dev python-virtualenv 
mkdir tensorflow
cd tensorflow
virtualenv --system-site-packages venv
source venv/bin/activate
pip install --upgrade tensorflow-gpu
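
The --system-site-packages flag above lets the virtualenv see globally installed packages (handy when numpy or scipy is already installed system-wide). As a minimal sketch of the same isolation workflow using the standard venv module (the /tmp/tf-demo path is just an illustration):

```shell
# Create an isolated environment that can still see system packages,
# activate it, and confirm the interpreter now lives inside it.
python3 -m venv --system-site-packages /tmp/tf-demo
. /tmp/tf-demo/bin/activate
python -c "import sys; print(sys.prefix)"   # -> /tmp/tf-demo
deactivate
```

Anything pip-installed while the environment is active lands under /tmp/tf-demo instead of the system site-packages.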

Test the GPUs:

Python 2.7.12 (default, Nov 19 2016, 06:48:10) 
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
...
2017-10-24 20:37:24.290049: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.6575
pciBusID 0000:01:00.0
Total memory: 10.91GiB
Free memory: 10.52GiB
...
2017-10-24 20:37:24.387363: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 1 with properties: 
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.6575
pciBusID 0000:02:00.0
Total memory: 10.91GiB
Free memory: 10.76GiB
2017-10-24 20:37:24.388168: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 1 
2017-10-24 20:37:24.388176: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y Y 
2017-10-24 20:37:24.388179: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 1:   Y Y 
2017-10-24 20:37:24.388186: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0)
2017-10-24 20:37:24.388189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0)
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0
/job:localhost/replica:0/task:0/gpu:1 -> device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0
2017-10-24 20:37:24.449867: I tensorflow/core/common_runtime/direct_session.cc:300] Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0
/job:localhost/replica:0/task:0/gpu:1 -> device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0
>>> 

2. PyTorch:

First, download the matching pip wheel from the official PyTorch website:

Then install it inside a virtualenv, which is very convenient:

mkdir pytorch
cd pytorch/
virtualenv venv
source venv/bin/activate
pip install /path/to/torch-0.2.0.post3-cp27-cp27mu-manylinux1_x86_64.whl 
pip install torchvision 

3. Torch

First, install following the official Torch instructions: http://torch.ch/docs/getting-started.html

git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch; bash install-deps;
./install.sh

With luck the installation completes cleanly; if you run into either of the following two problems, fix them as described below:

1) ./install.sh fails with a "Moses >= 1." error

Missing dependencies for nn: moses >= 1. — this sometimes appears while running ./install.sh.

Fix it with:

sudo apt install luarocks
sudo luarocks install moses

2) install.sh reports "error -- unsupported GNU version! gcc versions later than 5 are not supported!"

Ubuntu 17.04 ships with gcc 6.x, so downgrading to gcc 4.9 solves the problem:

sudo apt-get install g++-4.9  
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 20  
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.9 20 

Once the install script finishes successfully, it prompts:

Do you want to automatically prepend the Torch install location
to PATH and LD_LIBRARY_PATH in your /home/yourpath/.bashrc? (yes/no)
[yes] >>>
yes

The install script automatically writes Torch's install path into .bashrc; then try running th:

If you want to install Torch with Lua 5.2 instead of LuaJIT, install it as follows:

git clone https://github.com/torch/distro.git torch --recursive
cd torch

# clean old torch installation
./clean.sh

Set the Lua version in ~/.bashrc:
TORCH_LUA_VERSION=LUA52
Run source ~/.bashrc, then run:

./install.sh

First problem encountered:

cmake: not found

Install cmake to fix it:
sudo apt-get install cmake

Second problem:
readline.c:8:31: fatal error: readline/readline.h: No such file or directory

Install libreadline-dev to fix it:
sudo apt-get install libreadline-dev

Third problem: the installer again reports "error -- unsupported GNU version! gcc versions later than 5 are not supported!"

Ubuntu 17.04 ships with gcc 6.x, so downgrading to gcc 4.9 solves it:

sudo apt-get install g++-4.9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 20
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.9 20

When installation completes, it will still prompt:

Not updating your shell profile.
You might want to
add the following lines to your shell profile:

. /home/textminer/torch/torch/install/bin/torch-activate

Append the line ". /home/textminer/torch/torch/install/bin/torch-activate" to the end of ~/.profile, run source ~/.profile, then try th.

Note: this is an original article; when reposting, please credit and keep a link to "52nlp" (我爱自然语言处理): http://www.52nlp.cn

Permalink: Building a Deep Learning Server from Scratch: Installing Deep Learning Tools (TensorFlow + PyTorch + Torch) http://www.52nlp.cn/?p=10008

Building a Deep Learning Server from Scratch: Base Environment Setup (Ubuntu + GTX 1080 Ti + CUDA + cuDNN)


In the first half of last year I put together a GTX 1080 deep learning machine (see "Notes on Assembling a Deep Learning Machine") and wrote two posts on configuring its environment: "Deep Learning Machine Setup: Ubuntu 16.04 + Nvidia GTX 1080 + CUDA 8.0" and "Deep Learning Machine Setup: Ubuntu 16.04 + GeForce GTX 1080 + TensorFlow", which drew many comments. But that setup, completed over a year ago, now looks dated. In the past year deep learning has kept charging ahead: Andrew Ng left Baidu to start his own venture, whose first project, deeplearning.ai, partnered with Coursera to launch a Deep Learning Specialization (see "Notes on Andrew Ng's Deep Learning Courses"). The release of the GTX 1080 Ti, the 1080's upgrade, also spurred a round of deep learning server upgrades, and as it happened I configured three 1080 Ti servers (see "Building a Deep Learning Server from Scratch: Choosing the Hardware"). Meanwhile the tooling has iterated at an astonishing pace: Theano, its historical mission complete, chose to stop development and exited the deep learning stage, while TensorFlow, Torch, PyTorch and their ecosystems have grown rapidly. After an accidental mishap I once again reinstalled the system on my old machine, this time choosing the newest base tools, CUDA 9 and cuDNN 7.0 (see "Deep Learning Server Setup: Ubuntu 17.04 + Nvidia GTX 1080 + CUDA 9.0 + cuDNN 7.0 + TensorFlow 1.3"). In hindsight, though, compiling the TensorFlow GPU build from source is inconvenient under Chinese network conditions, and I prefer the CUDA 8 + cuDNN 6 + TensorFlow + PyTorch + Torch scheme, which is simple and convenient. So on the new machines I set up this environment under both Ubuntu 17.04 and Ubuntu 16.04; what follows is the installation record, which I hope can serve as a simple deep learning server setup guide.

1. Install Ubuntu: Ubuntu 16.04 or Ubuntu 17.04

Download an Ubuntu image (16.04 or 17.04; I used the desktop amd64 build) directly from the official Ubuntu site and make a bootable USB installer from it. For making an Ubuntu USB installer on a Mac, see: 在MAC下使用ISO制作Linux的安装USB盘. Then boot from the USB via the BIOS and install Ubuntu. If installation fails with a black screen or errors like "nouveau ... fifo ...", reboot, hold e at the install menu to enter the boot options, press F6 and select nomodeset (or add it by hand), then proceed with the installation. See also "Deep Learning Machine Setup: Ubuntu 16.04 + Nvidia GTX 1080 + CUDA 8.0".

2. Configure the apt sources and pip mirror:

Once the system is installed, it is worth pointing the apt sources and the pip index at local mirrors, which speeds up installing the related packages.

cd /etc/apt/
sudo cp sources.list sources.list.bak
sudo vi sources.list

For Ubuntu 16.04 I used the Aliyun mirror; add these entries to the top of sources.list:

deb-src http://archive.ubuntu.com/ubuntu xenial main restricted #Added by software-properties
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main restricted multiverse universe #Added by software-properties
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted multiverse universe #Added by software-properties
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse #Added by software-properties
deb http://archive.canonical.com/ubuntu xenial partner
deb-src http://archive.canonical.com/ubuntu xenial partner
deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted multiverse universe #Added by software-properties
deb http://mirrors.aliyun.com/ubuntu/ xenial-security universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-security multiverse

For Ubuntu 17.04 I used the NetEase (163) mirror:

deb http://mirrors.163.com/ubuntu/ zesty main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ zesty-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ zesty-updates main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ zesty-proposed main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ zesty-backports main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ zesty main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ zesty-security main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ zesty-updates main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ zesty-proposed main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ zesty-backports main restricted universe multiverse

Finally, update:

sudo apt-get update
sudo apt-get upgrade

The other task is pointing pip at the Aliyun mirror: http://mirrors.aliyun.com/help/pypi. Concretely, create a ~/.config/pip/pip.conf file containing:

[global]
trusted-host =  mirrors.aliyun.com
index-url = http://mirrors.aliyun.com/pypi/simple

Tsinghua's pip mirror is an alternative, but it happened to be acting up the couple of days I was installing, so I went with Aliyun.
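
pip reads this file with a standard INI parser, so you can sanity-check the settings programmatically. A small sketch (the in-memory sample below mirrors the file written above):

```python
import configparser

# A pip.conf written as described above.
sample = """
[global]
trusted-host = mirrors.aliyun.com
index-url = http://mirrors.aliyun.com/pypi/simple
"""

# Parse it the same way pip's INI machinery does and read back the values.
config = configparser.ConfigParser()
config.read_string(sample)
print(config["global"]["index-url"])     # http://mirrors.aliyun.com/pypi/simple
print(config["global"]["trusted-host"])  # mirrors.aliyun.com
```

trusted-host matters here because the Aliyun index URL is plain http; without it, newer pip versions refuse the mirror as untrusted.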

3. Install the 1080 Ti graphics driver:

sudo apt-get purge nvidia*
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update && sudo apt-get install nvidia-384 nvidia-settings

After installation, reboot the machine and run nvidia-smi to see the active driver:

4. Install CUDA:

Because the official pip builds of TensorFlow and PyTorch currently only support CUDA 8, I chose to install CUDA 8.0. NVIDIA's CUDA Toolkit page now defaults to offering CUDA 9.0, so go to NVIDIA's CUDA Toolkit Archive page instead, which lists every historical CUDA release alongside the current 9.0. Open the CUDA Toolkit 8.0 GA2 (Feb 2017) page and select "cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb" plus the "cuBLAS Patch Update to CUDA 8":

sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda

If you did not install the "cuBLAS Patch Update to CUDA 8" above, you can install the update as follows:

sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-cublas-performance-update_8.0.61-1_amd64.deb
sudo apt-get update  
sudo apt-get upgrade cuda

Set the environment variables in ~/.bashrc:

export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export CUDA_HOME=/usr/local/cuda

Run source ~/.bashrc to make them take effect.
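
The ${PATH:+:${PATH}} idiom in those exports only appends the colon-separated tail when the variable is already set, so you never end up with a dangling ":" in PATH. A quick demonstration with a throwaway variable:

```shell
# When DEMO is unset, the ${DEMO:+...} expansion produces nothing.
unset DEMO
echo "/usr/local/cuda/bin${DEMO:+:${DEMO}}"   # -> /usr/local/cuda/bin

# When DEMO is set, the expansion yields ":" plus the old value.
DEMO=/old/path
echo "/usr/local/cuda/bin${DEMO:+:${DEMO}}"   # -> /usr/local/cuda/bin:/old/path
```

An empty trailing ":" in PATH would silently add the current directory to the search path, which is why the guard is worth keeping.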

5. Install cuDNN:

Although cuDNN 7.0 is out, cuDNN 6.0 remains the best match for CUDA 8. On the NVIDIA developer site, find the cuDNN download page ( https://developer.nvidia.com/rdp/cudnn-download ) and, under "Download cuDNN v6.0 (April 27, 2017), for CUDA 8.0", choose "cuDNN v6.0 Library for Linux":

Installation afterwards is trivial: unpack the archive and copy the files into the system's CUDA directory. Note the "-d" on the last copy; without it you will get complaints that the .so files are not symbolic links:

tar -zxvf cudnn-8.0-linux-x64-v6.0.tgz 
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/ -d
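
After copying, you can confirm which cuDNN version the header advertises by parsing the CUDNN_* defines in cudnn.h. A sketch against a stub header (on a real system you would point awk at /usr/local/cuda/include/cudnn.h; the stub's version numbers below are illustrative):

```shell
# Stand-in for /usr/local/cuda/include/cudnn.h, which carries these defines.
cat > /tmp/cudnn_stub.h <<'EOF'
#define CUDNN_MAJOR 6
#define CUDNN_MINOR 0
#define CUDNN_PATCHLEVEL 21
EOF

# Collect the three #define values and print them as a dotted version string.
awk '/#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)/ { v[$2] = $3 }
     END { printf "%s.%s.%s\n", v["CUDNN_MAJOR"], v["CUDNN_MINOR"], v["CUDNN_PATCHLEVEL"] }' \
    /tmp/cudnn_stub.h   # -> 6.0.21
```

This is a handy check when multiple cuDNN tarballs have been unpacked over time and you are no longer sure which one actually landed in the CUDA tree.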

The steps above were tested on both Ubuntu 16.04 and Ubuntu 17.04. Finally, since several readers have commented that they could not download CUDA and cuDNN from the official site (which does seem related to network conditions in China), I uploaded the CUDA 8.0, CUDA 9.0, cuDNN 6.0 and cuDNN 7.0 packages to Baidu Netdisk, at two download locations:

CUDA 8.0 & CUDA 9.0 downloads: https://pan.baidu.com/s/1gfaS4lt password: ddji, containing:

1) CUDA 8.0 for Ubuntu 16.04: cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb
2) CUDA 8.0 for Ubuntu 16.04, cuBLAS update: cuda-repo-ubuntu1604-8-0-local-cublas-performance-update_8.0.61-1_amd64
3) CUDA 9.0 for Ubuntu 16.04: cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb
4) CUDA 9.0 for Ubuntu 17.04: cuda-repo-ubuntu1704-9-0-local_9.0.176-1_amd64

cuDNN 6.0 & cuDNN 7.0 downloads: https://pan.baidu.com/s/1qXIZqpA password: cwch, containing:

1) cuDNN 6.0 for CUDA 8: cudnn-8.0-linux-x64-v6.0.tgz
2) cuDNN 7.0 for CUDA 8: cudnn-8.0-linux-x64-v7.tgz
3) cuDNN 7.0 for CUDA 9: cudnn-9.0-linux-x64-v7.tgz


Permalink: Building a Deep Learning Server from Scratch: Base Environment Setup (Ubuntu + GTX 1080 Ti + CUDA + cuDNN) http://www.52nlp.cn/?p=9823

The Fourth Course in Andrew Ng's Deep Learning Series, Convolutional Neural Networks, Is Open


The fourth course in Andrew Ng's deep learning series, Convolutional Neural Networks, officially opens on November 6, but the course materials are already out and you can start watching as soon as you register. The course belongs to the Deep Learning Specialization on Coursera, a five-course series: the first three courses have already run several sessions, while courses 4 and 5 were long listed as pending. A new session starts on November 7; interested readers can follow it here: Deep Learning Specialization

This course will teach you how to build convolutional neural networks and apply it to image data. Thanks to deep learning, computer vision is working far better than just two years ago, and this is enabling numerous exciting applications ranging from safe autonomous driving, to accurate face recognition, to automatic reading of radiology images. You will: - Understand how to build a convolutional neural network, including recent variations such as residual networks. - Know how to apply convolutional networks to visual detection and recognition tasks. - Know to use neural style transfer to generate art. - Be able to apply these algorithms to a variety of image, video, and other 2D or 3D data. This is the fourth course of the Deep Learning Specialization.

In my view this is currently the best course series on the internet for getting started with deep learning: Andrew Ng teaches superbly, and the Python programming assignments peel the material apart layer by layer, so the courses are a real pleasure to work through. See the two write-ups I posted earlier:

Notes on Andrew Ng's Deep Learning Courses

A Summary of Andrew Ng's Deep Learning Courses

Also recommended: Deep Learning Course Resources Roundup (深度学习课程资源整理)

Building a Deep Learning Server from Scratch: Choosing the Hardware


Around JD.com's 618 sale last year, I bought a semi-assembled machine, "The one 2 Plus 自由版", from 雷霆世纪, then sourced a Gigabyte 1080 graphics card, 64 GB of RAM and a 4 TB conventional hard drive through other channels, and put together a "deep learning machine". Unfortunately, from late August into early September this year the machine started acting up: running Ubuntu, it would freeze from time to time, and after trying every fix without success I sent it to the Haier service center that 雷霆 contracts with, and the long wait began.

But I quickly activated plan B: after Googling around and carefully revisiting my earlier plan ("Notes on Assembling a Deep Learning Machine"), especially the suggestions readers left in its comments, I drew up the following list and decided to assemble the machine myself:

Part	Model	Price (CNY)	Link	Notes
CPU	Intel Xeon E5-1620 V4, 4 cores / 8 threads	1525		Taobao, tray chip
Cooler	DEEPCOOL 大霜塔 CPU cooler	219	https://item.jd.com/689273.html
Motherboard	GIGABYTE X99-UD4 (Intel X99 / LGA2011-3)	1500		Xianyu, secondhand, ~5 years of warranty left
RAM	Kingston HyperX Fury 16G × 2, DDR4 2400	2999	https://item.jd.com/2551254.html
SSD	Samsung 960 EVO 250G M.2 NVMe	899	https://item.jd.com/3739097.html
HDD	Seagate BarraCuda 3TB 7200rpm 64M SATA3	529	https://item.jd.com/3355984.html
PSU	EVGA 1600 G2, 1600W rated	2299	http://item.jd.com/3609960.html
Case	GAMEMAX 轻风健侠 full-tower silent case, black	279	https://item.jd.com/4142323.html
GPU	ASUS 战枭 GTX 1080 Ti	6899	https://item.jd.com/4709294.html

For this build I no longer planned around a single GPU; I designed in room to expand to four cards later, so the CPU, motherboard and PSU were all chosen to support up to four GPUs. In the comments under last year's build post, the message from reader winstar was especially enlightening:

Same goal as the author, different route. After two months of tinkering, my configuration:
CPU: Xeon E5-1620 V4, 14nm, 3.5GHz, 4 cores / 8 threads
RAM: Corsair 8G × 2, 2400MHz
GPU: ASUS GTX 1070, reference design (for future SLI, the reference card is the best fit both for compatibility and PCIe slot spacing, so I ended up back on the reference design)
Motherboard: ASUS X99-E WS (a genuinely strong board with true 4-way SLI support; very expensive new, bought secondhand on Xianyu)
PSU: Seasonic X1250, 1200W (also secondhand; no problems so far)
Drives: 240G Intel SSD + Seagate 2T hybrid drive + an old 120G Samsung drive (salvaged from an old laptop, formerly an external drive, now back in a PC running Ubuntu)
Case: I valued cooling and quiet, and it had to take an EATX board with 8 PCIe slots; I settled on the Tt F51 silent edition (great value)
Monitor: secondhand Dell U2515H, 2K panel

For the CPU I followed his plan: among current 40-lane CPUs the best value is the Xeon E5-1620 V4, on the newest 14nm process at 3.5GHz. It seems impossible to buy a new boxed unit in China, so I picked up a tray chip on Taobao for a bit over 1500 CNY (so-called "foreign e-waste"), with a one-year seller warranty. The motherboard choice was harder: if money is no object, go straight for the ASUS X99-E WS; otherwise there is the GIGABYTE X99-UD4 recommended in the article cited at the top, 《如何搭建一台深度学习服务器》. Both boards support four GPUs; the former is more capable, and of course pricier. Searching Xianyu, I happened to find a local seller offloading an X99-UD4 bought during this year's 618 sale, complete with box, manuals and invoice; he had registered it on GIGABYTE's site, and a check showed nearly five years of warranty left, so I bought the "secondhand" board for 1500 without hesitation. I also wavered over the PSU, ultimately following the plan in 灵魂机器's article: the EVGA 1600 G2, rated at 1600W; reportedly "the Nvidia DevBox uses the EVGA 1600W 80+ Gold 120-G2-1600-X1, so I'll use it too". The ASUS 战枭 1080 Ti came later: I got it for 6899 on October 2 during an ASUS brand day on JD, and it has since risen to 7699... In the meantime my old 1080 stood in for it.

Apart from the CPU and motherboard, everything came from JD. Once the parts arrived I started assembling after dinner that evening and did not power the machine on until 3 a.m.; the whole process left me exhausted but excited. My previous assembly experience went no further than swapping RAM, graphics cards and drives; I had never handled a motherboard, CPU or cooler myself. On this first full build I was extra careful, especially since the two core parts, the CPU and motherboard, were secondhand, so it felt like walking on thin ice. I read up and watched build videos beforehand, and whenever a part's installation was unclear I went back to the docs and videos. The most time-consuming steps were mounting the cooler and wiring the motherboard; also, this case puts the PSU bay right next to the drive cage and the PSU's cables are particularly thick, which made cable management awkward. On first power-up the machine kept rebooting and my heart sank; careful checking revealed that the two RAM sticks had to sit in specific slots on the board. I reseated them, powered on again, and the machine finally lit up. That moment felt like a real achievement.

That is my current home deep learning server solution. Although I left room for four GPUs, the spacing between the board's GPU slots turned out a bit tight; to run four cards you would really want reference-design cards. Perhaps I will get a chance to try that someday.

Recently I also configured two deep learning servers for the company through a reliable channel, drawing mainly on a friend's dual-GPU build and a quad-GPU build from Zhihu. The configurations follow, for reference:

Compact dual-GPU build:

Part	Model	Price (CNY)	Link	Notes
CPU	Intel Core i7-7700K, quad-core, boxed	2499	https://item.jd.com/4132882.html
Cooler	Corsair H55 liquid cooler	449	https://item.jd.com/10850633518.html
Motherboard	ASUS ROG STRIX Z270H GAMING (Intel Z270 / LGA 1151)	1599	https://item.jd.com/3778183.html
RAM	Corsair Vengeance LPX DDR4 3000 32GB (16G × 2)	3499	https://item.jd.com/1990572.html
SSD	Toshiba Q200 240GB SATA3	679	https://item.jd.com/1592448.html
HDD	Seagate BarraCuda 3TB 7200rpm 64M SATA3	529	https://item.jd.com/3355984.html
PSU	Corsair RM1000x, 1000W rated	1279	https://item.jd.com/1905101.html
Case	SAMA 巨魔合金版 mid-tower	299	https://item.jd.com/4434569.html
GPU	MSI GTX 1080 Ti GAMING X 11GB 352-bit GDDR5X PCI-E 3.0 × 2	13000	http://item.jd.com/4742314.html

High-end quad-GPU build (currently populated with two cards):

Part	Model	Price (CNY)	Link	Notes
CPU	Intel Core i7-6850K, six-core, boxed	4599	http://item.jd.com/11814000696.html
Cooler	Corsair H55 liquid cooler	449	https://item.jd.com/10850633518.html
Motherboard	ASUS X99-E WS/USB 3.1 workstation board	4759
RAM	Corsair Vengeance LPX DDR4 3000 32GB (16G × 2)	3499	https://item.jd.com/1990572.html
SSD	Samsung 960 EVO 250G M.2 NVMe	899	https://item.jd.com/3739097.html
HDD	Seagate BarraCuda 4TB 5900rpm	899	https://item.jd.com/4220257.html
PSU	Corsair AX1500i fully modular, 80Plus Gold	3449	http://item.jd.com/1124855.html
Case	Corsair AIR540 USB3.0	949	http://item.jd.com/12173900062.html
GPU	MSI GTX 1080 Ti GAMING X 11GB 352-bit GDDR5X PCI-E 3.0 × 2	13000	http://item.jd.com/4742314.html

In practice, even with the ASUS X99-E WS board, reference-design cards remain the better choice for a four-GPU setup. One last sigh: RAM prices have soared this year. Last 618 I paid under 2000 CNY for four sticks of Kingston HyperX Fury DDR4 2400 16G; the same kit now costs nearly 7000. It almost feels like holding an appreciating asset...


Permalink: Building a Deep Learning Server from Scratch: Choosing the Hardware http://www.52nlp.cn/?p=9821

A Summary of Andrew Ng's Deep Learning Courses


It has been about a month since Andrew Ng's deep learning courses were announced. I joined this Coursera deep learning series right away and, on finishing the first course, "Neural Networks and Deep Learning", wrote a short write-up: Notes on Andrew Ng's Deep Learning Courses. Since then I have worked through the videos and assignments of the second course, "Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization", and the third, "Structuring Machine Learning Projects", off and on, and earned the certificates. Honestly, for an experienced engineer the courses are not hard; with time to spare, you could finish all three in a week. But for someone with no background who wants to get into deep learning, it is worth brushing up on Python and machine learning first; if time allows, I recommend first taking Coursera's Python for Everybody Specialization and Andrew Ng's own Machine Learning course.

The Deep Learning Specialization has five sub-courses. As of this writing, the fourth, "Convolutional Neural Networks", and the fifth, "Sequence Models", have not been released, but last Thursday Coursera sent the following email to learners:

Dear Learners,

We hope that you are enjoying Structuring Machine Learning Projects and your experience in the Deep Learning Specialization so far!

As we are nearing the one month anniversary of the Deep Learning Specialization, we wanted to thank you for your feedback on the courses thus far, and communicate our timelines for when the next courses of the Specialization will be available.

We plan to begin the first session of Course 4, Convolutional Neural Networks, in early October, with Course 5, Sequence Models, following soon after. We hope these estimated course launch timelines will help you manage your subscription as appropriate.

If you’d like to maintain full access to current course materials on Coursera’s platform for Courses 1-3, you should keep your subscription active. Note that if you only would like to access your Jupyter Notebooks, you can save these locally. If you do not need to access these materials on platform, you can cancel your subscription and restart your subscription later, when the new courses are ready. All of your course progress in the Specialization will be saved, regardless of your decision.

Thank you for your patience as we work on creating a great learning experience for this Specialization. We look forward to sharing this content with you in the coming weeks!

Happy Learning,

Coursera

In short: the fourth course, Convolutional Neural Networks, will launch in early October, with the fifth, Sequence Models (RNNs and friends), following soon after. For paying subscribers: keep your subscription active if you want anytime access to the materials of the current three courses; if you only want your Jupyter Notebooks, i.e. the programming assignments, you can save them locally. You can also cancel now and re-subscribe when the new courses open; all your progress in the specialization will be preserved. So a money-saving approach is to save the course materials offline now, especially the programming assignments, and then cancel. The videos can be downloaded for offline viewing too, and there are several ways to watch them for free, e.g. Coursera's own audit mode, or NetEase Cloud Classroom, which offers the video portion of the course for free. Still, I maintain that watching the videos alone captures at most 30% of what this course offers; the real value is in the quizzes and especially the programming assignments, which richly repay careful study and are well worth the money.

Back to the sub-courses of Andrew Ng's deep learning series. The second course, "Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization", runs three weeks and covers tuning deep neural networks, regularization methods, and optimization algorithms:

Week 1 covers practical aspects of deep learning: train/dev/test splits, the bias and variance problem, regularization and dropout for fighting overfitting in neural networks, and gradient checking:

Once again the strength of this week lies in the programming work, with three assignments to complete:

Working through the assignments is also an excellent way to revisit the lecture videos and fill in the points that are easy to miss while listening.

Week 2 covers the optimization algorithms used in neural networks, including mini-batch gradient descent, RMSprop, and Adam:

The programming assignment is excellent too: guided step by step by the instructor's scaffolding code, you implement several optimizers yourself.

Week 3 is about hyperparameter tuning, batch normalization, and deep learning frameworks (Hyperparameter tuning, Batch Normalization and Programming Frameworks), with a digression on multi-class classification and softmax regression. The final video briefly introduces TensorFlow, and the programming assignment is TensorFlow-based as well, so for anyone who has not yet touched TensorFlow it is a fine first exposure; both the videos and the assignment are well designed:


The third course, "Structuring Machine Learning Projects", is lighter: just two weeks, with quizzes only and no programming assignments. It is essentially Andrew Ng's summary of his methodology for deep learning (or machine learning) projects:

Week 1 covers machine learning strategy: setting (quantifiable) project goals, how to distribute data across train/dev/test sets, comparison against human-level performance, and so on:

There is no programming assignment, but the quiz is a machine-learning case study on recognizing birds in a city, threading 15 questions through the experience shared in the lecture videos; it rewards careful thought.

Week 2's stated learning goals are:

“Understand what multi-task learning and transfer learning are
Recognize bias, variance and data-mismatch by looking at the performances of your algorithm on train/dev/test sets”

It covers error analysis (Error Analysis), handling mismatched training and dev/test set distributions (Mismatched training and dev/test set), transfer learning (Transfer learning) and multi-task learning (Multi-task learning), and end-to-end deep learning (End-to-end deep learning):

This week's quiz is again a case study, this time on self-driving cars (Autonomous driving (case study)), once more stringing 15 questions through the lecture material; the experience is just as good.

Finally, a new session of Andrew Ng's Deep Learning Specialization started on Coursera on September 12, so anyone who missed the last round can join now at just the right time. I expect this deep learning series to become another flagship Coursera offering after his Machine Learning course, rolling out session after session, so it is never too late to join. Also, if you are already enrolled, I suggest saving the materials promptly as you study; I download the videos for offline viewing and keep a local copy of each quiz and programming assignment after finishing it, for easy reference later.

Index: Notes on Andrew Ng's Deep Learning Courses


Permalink: A Summary of Andrew Ng's Deep Learning Courses http://www.52nlp.cn/?p=9761

Deep Learning Server Setup: Ubuntu 17.04 + Nvidia GTX 1080 + CUDA 9.0 + cuDNN 7.0 + TensorFlow 1.3


A year ago I configured a "deep learning server" and wrote two posts about setting up its environment: "Deep Learning Machine Setup: Ubuntu 16.04 + Nvidia GTX 1080 + CUDA 8.0" and "Deep Learning Machine Setup: Ubuntu 16.04 + GeForce GTX 1080 + TensorFlow", which received a lot of attention and citations. The deep learning wave has continued over the past year; in particular, Andrew Ng recently launched his deep learning course series on Coursera, a beginner-oriented set of courses that lowers the bar to entry even further.

A while ago this machine developed problems, and in the spirit of "better to tinker than to rest", I reinstalled the system, choosing the newest Ubuntu 17.04, CUDA 9.0, cuDNN 7.0 and TensorFlow 1.3, and promptly fell into a fresh series of pits. Moreover, nearly everything you can Google, in Chinese or English, still covers CUDA 8.0 with cuDNN 6.0 or 5.0, so here is a record of this round of deep learning environment configuration.

1. Preparation

With Ubuntu 17.04 installed, two preparatory steps come first. One is updating the apt sources; this time I used the NetEase (163) mirror:

deb http://mirrors.163.com/ubuntu/ zesty main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ zesty-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ zesty-updates main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ zesty-proposed main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ zesty-backports main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ zesty main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ zesty-security main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ zesty-updates main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ zesty-proposed main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ zesty-backports main restricted universe multiverse

The other is pointing pip at the Tsinghua mirror: https://mirrors.tuna.tsinghua.edu.cn/help/pypi/. Concretely, create a ~/.config/pip/pip.conf file containing:

[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple

Both of these changes considerably speed up installing the related packages; well worth the minute they take.

Next, install the driver for the GTX 1080. I followed "How to install Nvidia Drivers on Ubuntu 17.04 & below, Linux Mint" and chose the newest driver it pointed to, 381.09:


sudo apt-get purge nvidia*
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update && sudo apt-get install nvidia-381 nvidia-settings

Reboot after installation and run nvidia-smi to verify that the driver installed successfully. Note that installing CUDA 9 later pushed the 384.69 driver on me anyway, so I am not sure this step is strictly necessary.

2. Install the CUDA Toolkit

Go to NVIDIA's official CUDA page; after logging in you can download CUDA 9.0: CUDA Toolkit 9.0 Release Candidate Downloads. This time I picked the deb build for Ubuntu 17.04:

After downloading the deb file, install CUDA 9 the way the official instructions describe:

sudo dpkg -i cuda-repo-ubuntu1704-9-0-local-rc_9.0.103-1_amd64.deb
sudo apt-key add /var/cuda-repo-9-0-local-rc/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda

During installation it apparently installed a display driver once more, version 384.69, and afterwards nvidia-smi failed with: Failed to initialize NVML: Driver/library version mismatch. At this point you need to reboot so the new driver takes effect, then run nvidia-smi again:

Afterwards you can try the bundled CUDA examples; I copied the cuda-9.0 samples into a scratch directory and compiled them:


cp -r /usr/local/cuda-9.0/samples/ .
cd samples/
make

Then run a couple of the examples to check:

textminer@textminer:~/cuda_sample/samples/1_Utilities/bandwidthTest$ ./bandwidthTest

[CUDA Bandwidth Test] - Starting...
Running on...

Device 0: GeForce GTX 1080
Quick Mode

Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 11258.6

Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 12875.1

Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 231174.2

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

textminer@textminer:~/cuda_sample/samples/6_Advanced/c++11_cuda$ ./c++11_cuda

GPU Device 0: "GeForce GTX 1080" with compute capability 6.1

Read 3223503 byte corpus from ./warandpeace.txt
counted 107310 instances of 'x', 'y', 'z', or 'w' in "./warandpeace.txt"

Finally, set the CUDA environment variables in ~/.bashrc:

export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export CUDA_HOME=/usr/local/cuda

and run source ~/.bashrc to make them take effect.

3. Install cuDNN

Installing cuDNN is straightforward, but again requires the NVIDIA developer site: https://developer.nvidia.com/cudnn. This time we pick cuDNN 7, about which NVIDIA's page says:

What’s New in cuDNN 7?
Deep learning frameworks using cuDNN 7 can leverage new features and performance of the Volta architecture to deliver up to 3x faster training performance compared to Pascal GPUs. cuDNN 7 is now available as a free download to the members of the NVIDIA Developer Program. Highlights include:

Up to 2.5x faster training of ResNet50 and 3x faster training of NMT language translation LSTM RNNs on Tesla V100 vs. Tesla P100
Accelerated convolutions using mixed-precision Tensor Cores operations on Volta GPUs
Grouped Convolutions for models such as ResNeXt and Xception and CTC (Connectionist Temporal Classification) loss layer for temporal classification

I chose this build: cuDNN v7.0 (August 3, 2017), for CUDA 9.0 RC --- cuDNN v7.0 Library for Linux

After downloading, unpack it and copy the files into the CUDA install directory:

tar -zxvf cudnn-9.0-linux-x64-v7.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/ -d
sudo chmod a+r /usr/local/cuda/include/cudnn.h
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*

4. Install TensorFlow 1.3

Before installing TensorFlow, per the official TensorFlow installation docs, first install the libcupti-dev library:

The libcupti-dev library, which is the NVIDIA CUDA Profile Tools Interface. This library provides advanced profiling support. To install this library, issue the following command:

$ sudo apt-get install libcupti-dev

Then install the TensorFlow 1.3 GPU build the virtualenv way; note that I am using Python 2.7:

sudo apt-get install python-pip python-dev python-virtualenv
virtualenv --system-site-packages tensorflow1.3
source tensorflow1.3/bin/activate
(tensorflow1.3) textminer@textminer:~/tensorflow/tensorflow1.3$ pip install --upgrade tensorflow-gpu

Through the Tsinghua pip mirror, installing tensorflow-gpu this way is very fast:

Collecting tensorflow-gpu
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/ca/c4/e39443dcdb80631a86c265fb07317e2c7ea5defe73cb531b7cd94692f8f5/tensorflow_gpu-1.3.0-cp27-cp27mu-manylinux1_x86_64.whl (158.8MB)
21% |███████ | 34.7MB 958kB/s eta 0:02:10

Successfully built markdown html5lib
Installing collected packages: backports.weakref, protobuf, funcsigs, pbr, mock, numpy, markdown, html5lib, bleach, werkzeug, tensorflow-tensorboard, tensorflow-gpu
Successfully installed backports.weakref-1.0rc1 bleach-1.5.0 funcsigs-1.0.2 html5lib-0.9999999 markdown-2.6.9 mock-2.0.0 numpy-1.13.1 pbr-3.1.1 protobuf-3.4.0 tensorflow-gpu-1.3.0 tensorflow-tensorboard-0.1.5 werkzeug-0.12.2

This way of installing TensorFlow is convenient, and switching TensorFlow versions is easy too; but for the pitfall below, it would be my first choice. I then tried running TensorFlow, fully expecting a clean import with the GPU information printed:

(tensorflow1.3) textminer@textminer:~/tensorflow/tensorflow1.3$ python
Python 2.7.13 (default, Jan 19 2017, 14:48:08)
[GCC 6.3.0 20170118] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf

But it reported the following error:

File "/home/textminer/tensorflow/tensorflow1.3/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/install_sources#common_installation_problems

I checked that /usr/local/cuda/lib64/ contains libcusolver.so.9.0, and a bit of Googling basically confirmed the cause: the official TensorFlow build does not yet support CUDA 9, only CUDA 8, so this pip package looks by default for the CUDA 8-suffixed file, libcusolver.so.8.0.
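
When diagnosing this kind of ImportError it helps to list exactly which versions of the library are on disk; a minimal check (the path is CUDA's default install location, and on machines without CUDA the fallback message prints instead):

```shell
# List whatever libcusolver versions exist; fall back to a message if none do.
ls /usr/local/cuda/lib64/libcusolver.so.* 2>/dev/null \
    || echo "no libcusolver found under /usr/local/cuda/lib64"
```

Seeing only libcusolver.so.9.0 while the wheel wants libcusolver.so.8.0 pins the blame on the CUDA version mismatch rather than a broken install.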

Fortunately there was a way out. Although material on this is scarce, Google turned up two recent TensorFlow issues on GitHub: "Upgrade to CuDNN 7 and CUDA 9" and "CUDA 9RC + cuDNN7". The first is a request for official CUDA 9 / cuDNN 7 support: "Please upgrade TensorFlow to support CUDA 9 and CuDNN 7. Nvidia claims this will provide a 2x performance boost on Pascal GPUs." The second is an unofficial recipe for building from source with CUDA 9 and cuDNN 7: "This is an unofficial and very not supported patch to make it possible to compile TensorFlow with CUDA9RC and cuDNN 7 or CUDA8 + cuDNN 7."

So, building TensorFlow from source again — a route I do not recommend. I still remember the pain of last summer's source build, all the worse given network conditions in China, but with no alternative I had to try. To be clear: once the official TensorFlow release supports CUDA 9 and cuDNN 7, just install via pip as above and ignore everything below.

5. Installing TensorFlow from source

In fairness, following this 10-day-old GitHub issue to the letter basically works:

git clone https://github.com/tensorflow/tensorflow.git
wget https://storage.googleapis.com/tf-performance/public/cuda9rc_patch/0001-CUDA-9.0-and-cuDNN-7.0-support.patch
wget https://storage.googleapis.com/tf-performance/public/cuda9rc_patch/eigen.f3a22f35b044.cuda9.diff
cd tensorflow/
git status
git checkout db596594b5653b43fcb558a4753b39904bb62cbd~
git apply ../0001-CUDA-9.0-and-cuDNN-7.0-support.patch
./configure
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

Still, I hit a snag: after configure, building TensorFlow with bazel failed with the following error:

ERROR: Skipping '//tensorflow/tools/pip_package:build_pip_package': error loading package 'tensorflow/tools/pip_package': Encountered error while reading extension file 'cuda/build_defs.bzl': no such package '@local_config_cuda//cuda

Googling revealed the culprit was my bazel version, the then-newest 0.5.4; rolling back is the fix, so I downgraded to bazel 0.5.2 and the problem went away. For reference, here are my choices during configure:

Please specify the location of python. [Default is /usr/bin/python]:
Found possible Python library paths:
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
Please input the desired Python library path to use. Default is /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: Y
jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [y/N]: N
No Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [y/N]: N
No Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]:
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]:
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL support? [y/N]:
No OpenCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 8.0]: 9.0
Please specify the location where CUDA 9.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 6.0]: 7
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 6.1]
Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Add "--config=mkl" to your bazel command to build with MKL support.
Please note that MKL on MacOS or windows is still not supported.
If you would like to use a local MKL instead of downloading, please set the environment variable "TF_MKL_ROOT" every time before build.
Configuration finished

Even with the correct bazel version and a clean configure, the first bazel build of TensorFlow still ran into problems:

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

This is exactly the problem discussed in the issue mentioned above, which provides an Eigen patch as the workaround:

Attempt to build TensorFlow, so that Eigen is downloaded. This build will fail if building for CUDA9RC but will succeed for CUDA8
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

Apply the Eigen patch:

    cd -P bazel-out/../../../external/eigen_archive
    patch -p1 < ~/Downloads/eigen.f3a22f35b044.cuda9.diff

Build TensorFlow successfully
    cd -
    bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

This time TensorFlow compiled successfully. Finally, build the TensorFlow pip package and install it:

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
ls /tmp/tensorflow_pkg/
tensorflow-1.3.0rc1-cp27-cp27mu-linux_x86_64.whl
sudo pip install /tmp/tensorflow_pkg/tensorflow-1.3.0rc1-cp27-cp27mu-linux_x86_64.whl

Let's try the freshly installed TensorFlow in ipython:

Python 2.7.13 (default, Jan 19 2017, 14:48:08) 
Type "copyright", "credits" or "license" for more information.
 
IPython 5.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.
 
In [1]: import tensorflow as tf
 
In [2]: hello = tf.constant('Hello, Tensorflow')
 
In [3]: sess = tf.Session()
2017-09-01 13:32:08.828776: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: 
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.835
pciBusID 0000:01:00.0
Total memory: 7.92GiB
Free memory: 7.62GiB
2017-09-01 13:32:08.828808: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 
2017-09-01 13:32:08.828813: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0:   Y 
2017-09-01 13:32:08.828823: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
 
In [4]: print(sess.run(hello))
Hello, Tensorflow

The GPU information finally shows up. Now go enjoy the speedup that the GPU build of TensorFlow brings.

Note: this is an original article; when reposting, please credit and keep a link to "我爱自然语言处理" (52nlp): http://www.52nlp.cn

Permalink: Deep learning server setup: Ubuntu 17.04 + Nvidia GTX 1080 + CUDA 9.0 + cuDNN 7.0 + TensorFlow 1.3 http://www.52nlp.cn/?p=9704

Notes on Andrew Ng's Deep Learning Specialization

In the fall of 2011, Andrew Ng launched Machine Learning, a beginner-oriented proto-MOOC. In April 2012 he released an improved version on Coursera, Andrew Ng's Machine Learning: Master the Fundamentals, which also marked the birth of the Coursera platform. I joined that course right away and wrote some notes for it: Coursera公开课笔记: 斯坦福大学机器学习. Driven by the same MOOC wave, I built "课程图谱" and through it got to know many fellow MOOC enthusiasts. Before that, the videos of Andrew Ng's Stanford lectures on machine learning had also circulated widely, but that course, aimed at Stanford students, was relatively demanding. It was not until 2012, when MOOC platforms such as Coursera and Udacity combined lecture videos, interactive quizzes, and programming assignments, that MOOCs really came to life. In "deeplearning.ai: Announcing new Deep Learning courses on Coursera", the article Andrew Ng wrote to promote the new deep learning courses, he mentions that about 1.8 million students have taken the machine learning course since it opened, an astonishing number.

Back to the Deep Learning Specialization itself: the official start date was August 15, but it opened a few days earlier. After enrolling you can study free for 7 days, after which a monthly fee of $49 is charged until you cancel the subscription. The benefit of formally enrolling is that, besides the videos, you can take quizzes and submit programming assignments on the Coursera platform with instant feedback, and earn a course certificate if you pass. I joined this deep learning series, released under the deeplearning.ai brand, last Saturday and used my spare time to finish the first course, "Neural Networks and Deep Learning", including the videos, quizzes, and programming assignments; the experience was excellent. I had not taken a course in a long time since Coursera migrated to its new platform, and had Andrew Ng not returned to the MOOC world after leaving Baidu and rekindled my long-dormant MOOC enthusiasm, I probably would not have taken a course this seriously.

As for how the courses are organized, Andrew Ng has set the bar very low. Like his machine learning course, this is a deep learning series aimed at AI beginners:

If you want to break into AI, this Specialization will help you do so. Deep Learning is one of the most highly sought after skills in tech. We will help you become good at Deep Learning.

In five courses, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about Convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. You will work on case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing. You will master not only the theory, but also see how it is applied in industry. You will practice all these ideas in Python and in TensorFlow, which we will teach.

You will also hear from many top leaders in Deep Learning, who will share with you their personal stories and give you career advice.

AI is transforming multiple industries. After finishing this specialization, you will likely find creative ways to apply it to your work.

We will help you master Deep Learning, understand how to apply it, and build a career in AI.

Although aimed at beginners, the courses also cover a lot of practical engineering experience, so they suit both newcomers starting from the basics and experienced students filling in gaps:

Judging from actually taking the course, if I had to sum it up in one word, it would be "worth it", even for the money. The first course in the series, "Neural Networks and Deep Learning", has four parts:

1. Introduction to deep learning
2. Neural Networks Basics
3. Shallow neural networks
4. Deep Neural Networks

The first week, "Introduction to deep learning", is very easy, with no programming assignment and only simple multiple-choice quizzes; it is mainly a high-level introduction to deep learning and to the course itself.

The second week, "Neural Networks Basics", goes from binary classification to logistic regression, then gradient descent, then differentiation with computation graphs. If you have taken Andrew Ng's earlier "Machine Learning" course, everything except the computation graph should feel familiar:

The second week also provides the video lectures needed for the programming assignment: Python and Vectorization. The assignments for this specialization use Python, with an online Jupyter Notebook environment, so there is no offline coding and uploading, which is very convenient. This is a big change from the earlier machine learning course, which used Octave (GNU Octave, a Matlab-like language) and required coding and testing offline before submitting to the server. Completing assignments online feels great; more on that later. The other emphasis is vectorization of data processing, with heavy use of the NumPy package; unless told otherwise, avoid explicit "for loops":
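
As a small illustration of the vectorization point (my own sketch, not taken from the course materials), the same weighted sum can be written with explicit Python loops or as a single NumPy matrix-vector product; the vectorized form is both shorter and much faster on large arrays:

```python
import numpy as np

# Hypothetical data: weights w, bias b, and a batch of samples X.
w = np.array([0.5, -1.0, 2.0])
b = 0.1
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Loop version: one sample and one feature at a time.
loop_out = []
for x in X:
    s = b
    for wi, xi in zip(w, x):
        s += wi * xi
    loop_out.append(s)

# Vectorized version: one matrix-vector product for the whole batch.
vec_out = X.dot(w) + b

print(loop_out)
print(vec_out)  # same numbers as loop_out
```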

The best part is the design of the programming assignments. First comes an optional warm-up, Python Basics with numpy (optional), then the main assignment: Logistic Regression with a Neural Network mindset. Each part starts with a guide stating its goal; clicking "Open Notebook" opens the assignment. Much of the code in the notebook has already been carefully set up by the instructor, so the student only needs to fill in a few key lines at the marked spots (mostly vectorization). Running a cell produces output, and if it matches the expected output shown, you save and move on to the next question, which is very convenient. Then you submit the finished assignment; this part is not hard:

In week three, on "Shallow neural networks", what I cared about most was the explanation of backpropagation, but in the videos it is marked as optional, and, to be honest, Andrew Ng's treatment of it did not satisfy me. If backpropagation is still unclear after this part, you can supplement it with the articles listed in "反向传播算法入门资源索引" (A Resource Index for Getting Started with Backpropagation). Still, this is a minor blemish; the rest of the lectures are excellent, covering the choice of activation functions, why a nonlinear activation function is needed, and how to initialize the parameters of a neural network:
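
On the "why a nonlinear activation" point, a quick NumPy sketch of my own (not from the course) shows that stacking purely linear layers collapses into a single linear map, so depth buys nothing without a nonlinearity in between:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # layer 1 weights
W2 = rng.standard_normal((2, 4))   # layer 2 weights
x = rng.standard_normal(3)         # one input vector

# Two linear layers with no activation in between...
deep = W2.dot(W1.dot(x))

# ...equal one linear layer with the combined weight matrix W2 @ W1.
shallow = (W2 @ W1).dot(x)

assert np.allclose(deep, shallow)
```

A nonlinear activation such as sigmoid or ReLU between the layers breaks this equivalence, which is what makes deeper networks strictly more expressive.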

Despite that shortcoming in the videos, the programming assignment is close to perfect: in the Python notebook, the instructor systematically walks through the basic concepts of a neural network with fill-in-the-blank code, a classic case of "hand-holding you through writing a neural network in Python":

Update: this Saturday (2017.08.20) I finished the week four lectures and assignments and met the requirements for the certificate, though I still need to upload ID documents for verification. Below is a short addendum on week four.

Week four covers "Deep Neural Networks", mainly the concepts of multi-layer networks. With the week three foundation in place, the week four videos are relatively easy:

This week provides two programming assignments, one building a deep neural network step by step and one applying it; both are again excellent:

Finishing the last programming assignment earns you the score and qualifies you for the course certificate, but before receiving it you must upload ID documents to verify your identity; I have not done that step yet, so mine is pending:

That is my experience of finishing "Neural Networks and Deep Learning", the first course of Andrew Ng's deep learning series. If I had to sum up the series in a few words, they would again be: worth it, well worth it, very much worth it. If you are a complete outsider or a beginner in AI, I suggest first taking Andrew Ng's Machine Learning course as a bridge to the concepts, though that is optional; if you are a practitioner or a user of deep learning tools, this course is great for clearing away a lot of fog; and if machine learning and deep learning hold no secrets for you, feel free to smile and pass it by.

As for whether the course is worth paying for: I think so. Compared with the various paid AI courses in China, the $49 monthly fee is an absolute bargain, and if you have time you can finish all the courses within a month. The assignment platform in particular is excellent; after a few weeks of programming assignments, I can hardly wait to get into the remaining weeks and courses.

Finally, here is the course link once more. As its tagline says: master deep learning and break into AI. Join now: Deep Learning Specialization: Master Deep Learning, and Break into AI

How to Learn Natural Language Processing: A Book and a Course

Many readers have asked, through various channels, "how do I learn natural language processing?". I wrote a few short posts on this long ago, 《如何学习自然语言处理》 and 《几本自然语言处理入门书》, but I recommend even more this Zhihu thread: 自然语言处理怎么最快入门 (the fastest way to get started with NLP), which includes a systematic answer from 周明 of Microsoft Research Asia and a generous contribution from 刘知远 of Tsinghua University, 初学者如何查阅自然语言处理(NLP)领域学术资料 (how beginners can find academic NLP materials), along with selfless sharing from many other readers.

Still, for those hoping to get into NLP, I recommend reading this book first: Speech and Language Processing (the first edition was translated into Chinese as 《自然语言处理综论》), written by two giants of the field, Professor Dan Jurafsky of Stanford University and Professor James H. Martin of the University of Colorado. It was my own introductory book; I have read the Chinese translation of the first edition and the English second edition. The third edition is in progress, the authors have finished quite a few chapters, and all completed chapters can be downloaded: Speech and Language Processing (3rd ed. draft). Judging from the chapter list, the third edition adds many chapters on deep learning for NLP, with substantially updated content and scope:

Chapter (slides, if any) [relation to 2nd ed.]
1: Introduction [Ch. 1 in 2nd ed.]
2: Regular Expressions, Text Normalization, and Edit Distance (Text [pptx] [pdf]; Edit Distance [pptx] [pdf]) [Ch. 2 and parts of Ch. 3 in 2nd ed.]
3: Finite State Transducers
4: Language Modeling with N-Grams (LM [pptx] [pdf]) [Ch. 4 in 2nd ed.]
5: Spelling Correction and the Noisy Channel (Spelling [pptx] [pdf]) [expanded from pieces in Ch. 5 in 2nd ed.]
6: Naive Bayes Classification and Sentiment (NB [pptx] [pdf]; Sentiment [pptx] [pdf]) [new in this edition]
7: Logistic Regression
8: Neural Nets and Neural Language Models
9: Hidden Markov Models [Ch. 6 in 2nd ed.]
10: Part-of-Speech Tagging [Ch. 5 in 2nd ed.]
11: Formal Grammars of English [Ch. 12 in 2nd ed.]
12: Syntactic Parsing [Ch. 13 in 2nd ed.]
13: Statistical Parsing
14: Dependency Parsing [new in this edition]
15: Vector Semantics (Vector [pptx] [pdf]) [expanded from parts of Ch. 19 and 20 in 2nd ed.]
16: Semantics with Dense Vectors (Dense Vector [pptx] [pdf]) [new in this edition]
17: Computing with Word Senses: WSD and WordNet (Intro, Sim [pptx] [pdf]; WSD [pptx] [pdf]) [expanded from parts of Ch. 19 and 20 in 2nd ed.]
18: Lexicons for Sentiment and Affect Extraction (SentLex [pptx] [pdf]) [new in this edition]
19: The Representation of Sentence Meaning
20: Computational Semantics
21: Information Extraction [Ch. 22 in 2nd ed.]
22: Semantic Role Labeling and Argument Structure (SRL [pptx] [pdf]; Select [pptx] [pdf]) [expanded from parts of Ch. 19 and 20 in 2nd ed.]
23: Neural Models of Sentence Meaning (RNN, LSTM, CNN, etc.)
24: Coreference Resolution and Entity Linking
25: Discourse Coherence
26: Seq2seq Models and Summarization
27: Machine Translation
28: Question Answering
29: Conversational Agents
30: Speech Recognition
31: Speech Synthesis

In addition, one of the authors, Professor Dan Jurafsky of Stanford, once taught an NLP course on Coursera: Natural Language Processing. The course no longer seems to be findable on the new Coursera platform, but we made a backup on Baidu Netdisk that includes the course videos and the English second edition of the book; working through the two together works even better:

Link: https://pan.baidu.com/s/1kUCrV8r  Password: jghn

For anyone still looking for a way into natural language processing, working through this book and this course is a necessary first step: get the foundations in place before anything else.

You are also welcome to follow our WeChat public account, NLPJob, and reply "slp" to get the latest resources for the book and the course.

Introduction to spaCy, a Natural Language Processing Toolkit

spaCy is a Python natural language processing toolkit born in mid-2014. It bills itself as "Industrial-Strength Natural Language Processing in Python", that is, an industrial-grade Python NLP toolkit. spaCy makes heavy use of Cython to speed up its modules, which sets it apart from the more academically flavored Python NLTK and gives it real value for industrial applications.

Installing and building spaCy is straightforward; on Ubuntu, just install it with pip:

sudo apt-get install build-essential python-dev git
sudo pip install -U spacy

After installation you still need to download the model data. Taking the English models as an example, the "all" argument downloads everything:

sudo python -m spacy.en.download all

Alternatively, download the models and the GloVe-trained word vectors separately:


# Download the English tokenizer, POS tagging, parsing, and NER models
python -m spacy.en.download parser

# Download the GloVe-trained word vector data
python -m spacy.en.download glove

The downloaded data lives in the data directory under the spaCy installation; on my Ubuntu machine:

textminer@textminer:/usr/local/lib/python2.7/dist-packages/spacy/data$ du -sh *
776M en-1.1.0
774M en_glove_cc_300_1m_vectors-1.0.0

Inside the English model directory:

textminer@textminer:/usr/local/lib/python2.7/dist-packages/spacy/data/en-1.1.0$ du -sh *
424M deps
8.0K meta.json
35M ner
12M pos
84K tokenizer
300M vocab
6.3M wordnet

You can check that the model data installed correctly with:


textminer@textminer:~$ python -c "import spacy; spacy.load('en'); print('OK')"
OK

You can also run the test suite with pytest:


# First find spaCy's installation path:
python -c "import os; import spacy; print(os.path.dirname(spacy.__file__))"
/usr/local/lib/python2.7/dist-packages/spacy

# Then install pytest:
sudo python -m pip install -U pytest

# Finally run the tests:
python -m pytest /usr/local/lib/python2.7/dist-packages/spacy --vectors --model --slow
============================= test session starts ==============================
platform linux2 -- Python 2.7.12, pytest-3.0.4, py-1.4.31, pluggy-0.4.0
rootdir: /usr/local/lib/python2.7/dist-packages/spacy, inifile:
collected 318 items

../../usr/local/lib/python2.7/dist-packages/spacy/tests/test_matcher.py ........
../../usr/local/lib/python2.7/dist-packages/spacy/tests/matcher/test_entity_id.py ....
../../usr/local/lib/python2.7/dist-packages/spacy/tests/matcher/test_matcher_bugfixes.py .....
......
../../usr/local/lib/python2.7/dist-packages/spacy/tests/vocab/test_vocab.py .......Xx
../../usr/local/lib/python2.7/dist-packages/spacy/tests/website/test_api.py x...............
../../usr/local/lib/python2.7/dist-packages/spacy/tests/website/test_home.py ............

============== 310 passed, 5 xfailed, 3 xpassed in 53.95 seconds ===============

Now we can quickly try out spaCy's features, using the English data as an example. spaCy currently mainly supports English and German, with support for other languages being added gradually:


textminer@textminer:~$ ipython
Python 2.7.12 (default, Jul 1 2016, 15:12:24)
Type "copyright", "credits" or "license" for more information.

IPython 2.4.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.

In [1]: import spacy

# Load the English model data; this takes a moment
In [2]: nlp = spacy.load('en')

Word tokenization (spaCy 1.2 added a Chinese tokenization interface based on the Jieba segmenter):

In [3]: test_doc = nlp(u"it's word tokenize test for spacy")

In [4]: print(test_doc)
it's word tokenize test for spacy

In [5]: for token in test_doc:
print(token)
...:
it
's
word
tokenize
test
for
spacy

Sentence segmentation for English:


In [6]: test_doc = nlp(u'Natural language processing (NLP) deals with the application of computational models to text or speech data. Application areas within NLP include automatic (machine) translation between languages; dialogue systems, which allow a human to interact with a machine using natural language; and information extraction, where the goal is to transform unstructured text into structured (database) representations that can be searched and browsed in flexible ways. NLP technologies are having a dramatic impact on the way people interact with computers, on the way people interact with each other through the use of language, and on the way people access the vast amount of linguistic data now in electronic form. From a scientific viewpoint, NLP involves fundamental questions of how to structure formal models (for example statistical models) of natural language phenomena, and of how to design algorithms that implement these models.')

In [7]: for sent in test_doc.sents:
print(sent)
...:
Natural language processing (NLP) deals with the application of computational models to text or speech data.
Application areas within NLP include automatic (machine) translation between languages; dialogue systems, which allow a human to interact with a machine using natural language; and information extraction, where the goal is to transform unstructured text into structured (database) representations that can be searched and browsed in flexible ways.
NLP technologies are having a dramatic impact on the way people interact with computers, on the way people interact with each other through the use of language, and on the way people access the vast amount of linguistic data now in electronic form.
From a scientific viewpoint, NLP involves fundamental questions of how to structure formal models (for example statistical models) of natural language phenomena, and of how to design algorithms that implement these models.


Lemmatization:


In [8]: test_doc = nlp(u"you are best. it is lemmatize test for spacy. I love these books")

In [9]: for token in test_doc:
print(token, token.lemma_, token.lemma)
...:
(you, u'you', 472)
(are, u'be', 488)
(best, u'good', 556)
(., u'.', 419)
(it, u'it', 473)
(is, u'be', 488)
(lemmatize, u'lemmatize', 1510296)
(test, u'test', 1351)
(for, u'for', 480)
(spacy, u'spacy', 173783)
(., u'.', 419)
(I, u'i', 570)
(love, u'love', 644)
(these, u'these', 642)
(books, u'book', 1011)

Part-of-speech tagging (POS tagging):


In [10]: for token in test_doc:
print(token, token.pos_, token.pos)
....:
(you, u'PRON', 92)
(are, u'VERB', 97)
(best, u'ADJ', 82)
(., u'PUNCT', 94)
(it, u'PRON', 92)
(is, u'VERB', 97)
(lemmatize, u'ADJ', 82)
(test, u'NOUN', 89)
(for, u'ADP', 83)
(spacy, u'NOUN', 89)
(., u'PUNCT', 94)
(I, u'PRON', 92)
(love, u'VERB', 97)
(these, u'DET', 87)
(books, u'NOUN', 89)

Named entity recognition (NER):


In [11]: test_doc = nlp(u"Rami Eid is studying at Stony Brook University in New York")

In [12]: for ent in test_doc.ents:
print(ent, ent.label_, ent.label)
....:
(Rami Eid, u'PERSON', 346)
(Stony Brook University, u'ORG', 349)
(New York, u'GPE', 350)

Noun phrase (noun chunk) extraction:


In [13]: test_doc = nlp(u'Natural language processing (NLP) deals with the application of computational models to text or speech data. Application areas within NLP include automatic (machine) translation between languages; dialogue systems, which allow a human to interact with a machine using natural language; and information extraction, where the goal is to transform unstructured text into structured (database) representations that can be searched and browsed in flexible ways. NLP technologies are having a dramatic impact on the way people interact with computers, on the way people interact with each other through the use of language, and on the way people access the vast amount of linguistic data now in electronic form. From a scientific viewpoint, NLP involves fundamental questions of how to structure formal models (for example statistical models) of natural language phenomena, and of how to design algorithms that implement these models.')

In [14]: for np in test_doc.noun_chunks:
print(np)
....:
Natural language processing
Natural language processing (NLP) deals
the application
computational models
text
speech
data
Application areas
NLP
automatic (machine) translation
languages
dialogue systems
a human
a machine
natural language
information extraction
the goal
unstructured text
structured (database) representations
flexible ways
NLP technologies
a dramatic impact
the way
people
computers
the way
people
the use
language
the way
people
the vast amount
linguistic data
electronic form
a scientific viewpoint
NLP
fundamental questions
formal models
example
natural language phenomena
algorithms
these models

Computing the similarity of two words from their word vectors:


In [15]: test_doc = nlp(u"Apples and oranges are similar. Boots and hippos aren't.")

In [16]: apples = test_doc[0]

In [17]: print(apples)
Apples

In [18]: oranges = test_doc[2]

In [19]: print(oranges)
oranges

In [20]: boots = test_doc[6]

In [21]: print(boots)
Boots

In [22]: hippos = test_doc[8]

In [23]: print(hippos)
hippos

In [24]: apples.similarity(oranges)
Out[24]: 0.77809414836023805

In [25]: boots.similarity(hippos)
Out[25]: 0.038474555379008429

spaCy also includes syntactic parsing and other features. Also worth noting: since version 1.0, spaCy supports hooking in deep learning tools such as TensorFlow and Keras; see the sentiment analysis example in the official documentation: Hooking a deep learning model into spaCy.

References:
spaCy official documentation
Getting Started with spaCy

Permalink: Introduction to spaCy, a natural language processing toolkit http://www.52nlp.cn/?p=9386

A Resource Index for Getting Started with Backpropagation

1. Start with Wikipedia to get a rough overall picture:
Backpropagation (反向传播算法)

2. Pick up pen and paper, plus ipython or a calculator, and get an intuitive feel for backpropagation through a worked example:
A Step by Step Backpropagation Example

3. Then play with the 200-odd lines of Python code that accompany that example: Neural Network with Backpropagation

4. With that hands-on experience, you are ready for the classic 1986 paper: Learning representations by back-propagating errors

5. If it still feels opaque, read chapter 2 of the online deep learning book "Neural Networks and Deep Learning": How the backpropagation algorithm works

6. Or watch the first few videos on backpropagation in this neural network tutorial on YouTube: Neural Network Tutorial

7. hankcs has written a walkthrough of the above videos and related materials: 反向传播神经网络极简入门

8. Here is a fairly concise mathematical derivation: Derivation of Backpropagation

9. gogo's reading of the principle and code of backpropagation: 神经网络反向传播的数学原理

10. And for a more fundamental explanation of backpropagation as reverse-mode automatic differentiation: Calculus on Computational Graphs: Backpropagation
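
To make the resources above concrete, here is a minimal sketch of my own (not taken from any of the linked materials): backpropagation through a single sigmoid neuron with a squared-error loss, implemented in plain NumPy and checked against a numerical gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One training example: input x, target y, parameters w and b.
x = np.array([0.5, -0.2])
y = 1.0
w = np.array([0.1, 0.3])
b = 0.0

def loss(w, b):
    a = sigmoid(w.dot(x) + b)
    return 0.5 * (a - y) ** 2

# Forward pass.
z = w.dot(x) + b
a = sigmoid(z)

# Backward pass, by the chain rule:
# dL/dz = (a - y) * sigmoid'(z) = (a - y) * a * (1 - a)
dz = (a - y) * a * (1 - a)
dw = dz * x   # dL/dw
db = dz       # dL/db

# Sanity check dL/dw[0] against a centered finite difference.
eps = 1e-6
w_plus = w.copy();  w_plus[0] += eps
w_minus = w.copy(); w_minus[0] -= eps
num_dw0 = (loss(w_plus, b) - loss(w_minus, b)) / (2 * eps)

assert abs(dw[0] - num_dw0) < 1e-7
```

The same chain-rule bookkeeping, applied layer by layer, is exactly what the multi-layer derivations in the resources above work out in full.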

Permalink: A resource index for getting started with backpropagation http://www.52nlp.cn/?p=9350