wangjiong's blog

every day is a new day



Feature Selection in Machine Learning

Posted on 2017-06-29 | Category: Machine Learning

The first step in machine learning is feature selection.

react-router Source Code Analysis

Posted on 2017-06-28 | Category: react

This analysis is based on the master branch.

Example

import React from 'react'
import {
  BrowserRouter as Router,
  Route,
  Link
} from 'react-router-dom'

const Home = () => (
  <div>
    <h2>Home</h2>
  </div>
)

const About = () => (
  <div>
    <h2>About</h2>
  </div>
)

const AboutAuthor = () => (
  <div>
    <h2>AboutAuthor</h2>
  </div>
)

const AboutTopic = () => (
  <div>
    <h2>AboutTopic</h2>
  </div>
)

const Topic = ({ match }) => (
  <div>
    <h3>{match.params.topicId}</h3>
  </div>
)

const Topics = ({ match }) => (
  <div>
    <h2>Topics</h2>
    <ul>
      <li>
        <Link to={`${match.url}/rendering`}>
          Rendering with React
        </Link>
      </li>
      <li>
        <Link to={`${match.url}/components`}>
          Components
        </Link>
      </li>
      <li>
        <Link to={`${match.url}/props-v-state`}>
          Props v. State
        </Link>
      </li>
    </ul>
    <Route path={`${match.url}/:topicId`} component={Topic}/>
    <Route exact path={match.url} render={() => (
      <h3>Please select a topic.</h3>
    )}/>
  </div>
)

const BasicExample = () => (
  <Router>
    <div>
      <ul>
        <li><Link to="/">Home</Link></li>
        <li><Link to="/about">About</Link></li>
        <li><Link to="/topics">Topics</Link></li>
      </ul>
      <hr/>
      <Route exact path="/" component={Home}/>
      <Route path="/about" component={About}/>
      <Route path="/topics" component={Topics}/>
    </div>
  </Router>
)

export default BasicExample

Router itself is a React component, which the source code below also makes clear.

Source code

Router Component

history is used to read the current location.
state.match holds an object describing the current pathname. Router's children are Route components, which hold the components to render; whether a given component renders is decided inside Route itself.

import warning from 'warning'
import invariant from 'invariant'
import React from 'react'
import PropTypes from 'prop-types'

/**
 * The public API for putting history on context.
 */
class Router extends React.Component {
  static propTypes = {
    history: PropTypes.object.isRequired,
    children: PropTypes.node
  }

  static contextTypes = {
    router: PropTypes.object
  }

  static childContextTypes = {
    router: PropTypes.object.isRequired
  }

  getChildContext() {
    return {
      router: {
        ...this.context.router,
        history: this.props.history,
        route: {
          location: this.props.history.location,
          match: this.state.match
        }
      }
    }
  }

  state = {
    match: this.computeMatch(this.props.history.location.pathname)
  }

  computeMatch(pathname) {
    return {
      path: '/',
      url: '/',
      params: {},
      isExact: pathname === '/'
    }
  }

  componentWillMount() {
    const { children, history } = this.props

    invariant(
      children == null || React.Children.count(children) === 1,
      'A <Router> may have only one child element'
    )

    // Do this here so we can setState when a <Redirect> changes the
    // location in componentWillMount. This happens e.g. when doing
    // server rendering using a <StaticRouter>.
    this.unlisten = history.listen(() => {
      this.setState({
        match: this.computeMatch(history.location.pathname)
      })
    })
  }

  componentWillReceiveProps(nextProps) {
    warning(
      this.props.history === nextProps.history,
      'You cannot change <Router history>'
    )
  }

  componentWillUnmount() {
    this.unlisten()
  }

  render() {
    const { children } = this.props
    return children ? React.Children.only(children) : null
  }
}

export default Router

Route Component

Route matches a single path and renders the corresponding component.
[Figure: Route rendering flow chart]

import warning from 'warning'
import React from 'react'
import PropTypes from 'prop-types'
import matchPath from './matchPath'

/**
 * The public API for matching a single path and rendering.
 */
class Route extends React.Component {
  static propTypes = {
    computedMatch: PropTypes.object, // private, from <Switch>
    path: PropTypes.string,
    exact: PropTypes.bool,
    strict: PropTypes.bool,
    component: PropTypes.func,
    render: PropTypes.func,
    children: PropTypes.oneOfType([
      PropTypes.func,
      PropTypes.node
    ]),
    location: PropTypes.object
  }

  static contextTypes = {
    router: PropTypes.shape({
      history: PropTypes.object.isRequired,
      route: PropTypes.object.isRequired,
      staticContext: PropTypes.object
    })
  }

  static childContextTypes = {
    router: PropTypes.object.isRequired
  }

  getChildContext() {
    return {
      router: {
        ...this.context.router,
        route: {
          location: this.props.location || this.context.router.route.location,
          match: this.state.match
        }
      }
    }
  }

  state = {
    match: this.computeMatch(this.props, this.context.router)
  }

  computeMatch({ computedMatch, location, path, strict, exact }, { route }) {
    if (computedMatch)
      return computedMatch // <Switch> already computed the match for us

    const pathname = (location || route.location).pathname

    return path ? matchPath(pathname, { path, strict, exact }) : route.match
  }

  componentWillMount() {
    const { component, render, children } = this.props

    warning(
      !(component && render),
      'You should not use <Route component> and <Route render> in the same route; <Route render> will be ignored'
    )

    warning(
      !(component && children),
      'You should not use <Route component> and <Route children> in the same route; <Route children> will be ignored'
    )

    warning(
      !(render && children),
      'You should not use <Route render> and <Route children> in the same route; <Route children> will be ignored'
    )
  }

  componentWillReceiveProps(nextProps, nextContext) {
    warning(
      !(nextProps.location && !this.props.location),
      '<Route> elements should not change from uncontrolled to controlled (or vice versa). You initially used no "location" prop and then provided one on a subsequent render.'
    )

    warning(
      !(!nextProps.location && this.props.location),
      '<Route> elements should not change from controlled to uncontrolled (or vice versa). You provided a "location" prop initially but omitted it on a subsequent render.'
    )

    this.setState({
      match: this.computeMatch(nextProps, nextContext.router)
    })
  }

  render() {
    const { match } = this.state
    const { children, component, render } = this.props
    const { history, route, staticContext } = this.context.router
    const location = this.props.location || route.location
    const props = { match, location, history, staticContext }

    return (
      component ? ( // the component prop is checked first
        // render it only if the path matched
        match ? React.createElement(component, props) : null
      ) : render ? ( // render prop is next, only called if there's a match
        match ? render(props) : null
      ) : children ? ( // children come last, always called
        typeof children === 'function' ? (
          children(props)
        ) : !Array.isArray(children) || children.length ? ( // Preact defaults to empty children array
          React.Children.only(children)
        ) : (
          null
        )
      ) : (
        null
      )
    )
  }
}

export default Route

matchPath: path matching

1. path-to-regexp converts the path pattern into a regular expression object, which is then cached.
2. The regex is matched against the pathname.

import pathToRegexp from 'path-to-regexp'

// var re = pathToRegexp('/foo/:bar', keys)
// re = /^\/foo\/([^\/]+?)\/?$/i
const patternCache = {}
const cacheLimit = 10000
let cacheCount = 0

const compilePath = (pattern, options) => {
  const cacheKey = `${options.end}${options.strict}`
  const cache = patternCache[cacheKey] || (patternCache[cacheKey] = {})

  if (cache[pattern])
    return cache[pattern]

  const keys = []
  const re = pathToRegexp(pattern, keys, options)
  const compiledPattern = { re, keys }

  if (cacheCount < cacheLimit) {
    cache[pattern] = compiledPattern
    cacheCount++
  }

  return compiledPattern
}

/**
 * Public API for matching a URL pathname to a path pattern.
 */
const matchPath = (pathname, options = {}) => {
  if (typeof options === 'string')
    options = { path: options }

  const { path = '/', exact = false, strict = false } = options
  const { re, keys } = compilePath(path, { end: exact, strict })
  // match the pathname against the compiled regex
  const match = re.exec(pathname)

  if (!match)
    return null

  const [ url, ...values ] = match
  const isExact = pathname === url

  if (exact && !isExact)
    return null

  return {
    path, // the path pattern used to match
    url: path === '/' && url === '' ? '/' : url, // the matched portion of the URL
    isExact, // whether or not we matched exactly
    params: keys.reduce((memo, key, index) => {
      memo[key.name] = values[index]
      return memo
    }, {})
  }
}

export default matchPath

Other components

Prompt: prompts the user before navigating away from a page
Redirect: navigates to a new location from inside a component
StaticRouter: used for server-side rendering

Memory Management Crash Course

Posted on 2017-06-22

Machine Learning: Three Methods for Solving Parameters

Posted on 2017-06-16 | Category: Machine Learning

The three methods most commonly used to solve for model parameters in machine learning: least squares, gradient descent, and Newton's method.

Least squares

Least squares fits a curve by minimizing the sum of squared deviations.
Suppose the polynomial is
$$y = a_0 + a_1 x + \dots + a_k x^k$$
The sum of squared deviations, i.e. the total distance from the known points $(x_i, y_i)$ to the curve, is
$$R^2 = \sum_{i=1}^{n} \left[ y_i - (a_0 + a_1 x_i + \dots + a_k x_i^k) \right]^2$$
The goal is to minimize $R^2$; taking the partial derivative with respect to $a_0$, for example, gives
$$\frac{\partial R^2}{\partial a_0} = -2 \sum_{i=1}^{n} \left[ y_i - (a_0 + a_1 x_i + \dots + a_k x_i^k) \right]$$
Setting the partial derivative with respect to each coefficient to zero yields a linear system for the $a_j$; a minimal sketch of solving it follows.
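To make that concrete, here is a minimal NumPy sketch (my addition, not from the original post; the sample polynomial and noise level are invented for illustration):

import numpy as np

# Sample noisy points from y = 1 + 2x + 3x^2.
rng = np.random.RandomState(0)
x = np.linspace(-1, 1, 50)
y = 1 + 2 * x + 3 * x ** 2 + 0.01 * rng.randn(50)

# Design matrix with columns [1, x, x^2]; least squares then solves
# min ||X a - y||^2, which is exactly minimizing R^2 above.
X = np.stack([np.ones_like(x), x, x ** 2], axis=1)
a, *_ = np.linalg.lstsq(X, y, rcond=None)
print(a)  # ~ [1, 2, 3]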

Visualizing Learning with TensorBoard

Posted on 2017-06-16 | Category: Machine Learning

TensorBoard visualization works by reading TensorFlow event files, which contain the main data produced during a TensorFlow run. The workflow:

Create the TensorFlow graph and choose which nodes to collect summary data from

Run the operations and summarize the data

Merge the summary operations (see the sketch below)
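Below is a minimal sketch of that workflow in the TensorFlow 1.x summary API (my addition; the toy counter and log directory are invented for illustration):

import tensorflow as tf

# A toy graph: a single scalar we update each step.
counter = tf.Variable(0.0, name="counter")
increment = tf.assign_add(counter, 1.0)

# Step 1: attach a summary op to the node we want to watch.
tf.summary.scalar("counter_value", counter)
# Step 3: merge all summary ops into a single op.
merged = tf.summary.merge_all()

with tf.Session() as sess:
    # The FileWriter produces the event file that TensorBoard reads.
    writer = tf.summary.FileWriter("/tmp/demo_logs", sess.graph)
    sess.run(tf.global_variables_initializer())
    for step in range(10):
        # Step 2: run the ops and collect the summarized data.
        summary, _ = sess.run([merged, increment])
        writer.add_summary(summary, step)
    writer.close()

# Then run: tensorboard --logdir=/tmp/demo_logs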

Recognizing Handwritten Digits with TensorFlow Softmax Regression

Posted on 2017-06-15 | Category: Machine Learning

The MNIST dataset

Each example in the dataset consists of an image and a corresponding label. Each image contains 28*28 pixels, and each pixel value lies between 0 and 1.

Softmax regression

Softmax regression is a generalization of logistic regression: logistic regression handles binary classification, while softmax regression handles multi-class classification.
The logistic function (also called the sigmoid function):
$$g(z) = \frac{1}{1 + e^{-z}}$$

The linear function:
$$\theta_0 + \theta_1 x_1 + \dots + \theta_n x_n = \sum_{i=0}^{n} \theta_i x_i = \theta^T x$$
Substituting into $g(z)$:
$$h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$$
From this:
$$P(y=1 \mid x;\theta) = h_\theta(x)$$
$$P(y=0 \mid x;\theta) = 1 - h_\theta(x)$$

Using maximum likelihood estimation, and assuming all samples are independent and identically distributed:
$$P(y \mid x;\theta) = (h_\theta(x))^{y} (1 - h_\theta(x))^{1-y}$$
The likelihood function:
$$L(\theta) = \prod_{i=1}^{m} P(y^{(i)} \mid x^{(i)};\theta) = \prod_{i=1}^{m} (h_\theta(x^{(i)}))^{y^{(i)}} (1 - h_\theta(x^{(i)}))^{1-y^{(i)}}$$
The log-likelihood:
$$l(\theta) = \log L(\theta) = \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1-y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right]$$
Maximum likelihood estimation finds the $\theta$ that maximizes $l(\theta)$, which can be done with gradient descent. Digit classification must recognize 0 through 9, so softmax regression is used to obtain a probability for each digit, e.g.
$$p(y=0 \mid x) = \frac{1}{1 + e^{(w^T x + b)}}$$
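The post stops at the math, so here is a minimal end-to-end sketch along the lines of the classic TensorFlow MNIST tutorial (TF 1.x API; my addition, with the usual "MNIST_data/" download path assumed):

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Each image is flattened to 784 = 28*28 pixel values in [0, 1].
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])  # one-hot labels 0-9

# One linear layer followed by softmax over the 10 digits.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Cross-entropy loss (the negative log-likelihood above), minimized by gradient descent.
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), axis=1))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                        y_: mnist.test.labels}))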

Neural Network Basics

Posted on 2017-06-14 | Category: Machine Learning

DNN: Deep Neural Networks.
There are two camps on how to realize artificial intelligence: one works top-down, through logic and symbolic reasoning; the other works bottom-up, by simulating the neural networks of the brain. Neural networks are the second camp's approach.
A neural network is made up of a number of neurons.

Neuron


Inputs: x1, x2, … xn
Weights: w1, w2, … wn
Combination function: c
Activation function: a
Bias: b
Output: y

Perceptron

The earliest neurons date back to the 1950s and 60s, when Frank Rosenblatt invented a kind of neuron called the perceptron. A perceptron's inputs and output are binary, 0 or 1.

Sigmoid neuron

When building a neural network out of perceptrons, even a tiny change in a neuron's weights or bias can cause a drastic change in the output. Sigmoid neurons were introduced to fix this; their inputs and output are floating-point values.
[Figure: perceptron]
[Figure: sigmoid neuron]
A sigmoid neuron is effectively a smoothed perceptron: when the weights and bias change slightly, the output changes by a correspondingly small amount, as the sketch below illustrates.
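A tiny NumPy sketch (my addition) of the two neuron types; the input, weight, and bias values are arbitrary:

import numpy as np

def perceptron(x, w, b):
    # Binary output: fires (1) only when the weighted sum clears the bias.
    return 1 if np.dot(w, x) + b > 0 else 0

def sigmoid_neuron(x, w, b):
    # Smooth output in (0, 1): small changes in w or b move the output slightly.
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.8])
w = np.array([0.4, 0.6])
b = -0.5
print(perceptron(x, w, b))      # 1, since 0.68 - 0.5 > 0
print(sigmoid_neuron(x, w, b))  # ~0.545, a graded version of the same decision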

Network structure

Input layer -> hidden layers -> output layer

Designing the hidden layers means balancing the number of layers against computation time.

Quickly Building a DNN to Classify Irises with tf.contrib.learn

Posted on 2017-06-13 | Category: Machine Learning

tf.contrib.learn is a high-level API provided by TensorFlow. Below is an example that uses a DNN to classify the iris dataset.
First, a look at the dataset: the labels 0/1/2 stand for the three species Setosa, Versicolor, and Virginica.

Sepal.Length  Sepal.Width  Petal.Length  Petal.Width  Species
7.9           3.8          6.4           2.0          0/1/2

1. Load the data
2. Build the neural network classifier
3. Fit the model to the training data
4. Evaluate the model's accuracy
5. Classify new samples

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
import numpy as np

IRIS_TRAINING = "iris_training.csv"
IRIS_TEST = "iris_test.csv"

# Load the datasets
training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TRAINING,
    target_dtype=np.int,
    features_dtype=np.float32)
test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TEST,
    target_dtype=np.int,
    features_dtype=np.float32)
print(training_set)

# Feature columns: all four features are real-valued
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]

# Build a 3-layer DNN with 10, 20, 10 units per layer
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3,
                                            model_dir="/tmp/iris_model")

# Fit the model over 2000 steps
classifier.fit(x=training_set.data,
               y=training_set.target,
               steps=2000)

# Evaluate accuracy
accuracy_score = classifier.evaluate(x=test_set.data,
                                     y=test_set.target)["accuracy"]
print('Accuracy: {0:f}'.format(accuracy_score))

# Classify two new samples
new_samples = np.array(
    [[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=float)
y = list(classifier.predict(new_samples, as_iterable=True))
print('Predictions: {}'.format(str(y)))

Output

Accuracy: 0.966667
Predictions: [1, 1]

TensorFlow Basic Usage

Posted on 2017-06-11 | Category: Machine Learning

Basics

Key points of TensorFlow:
1. Computation is represented as a graph
2. Graphs are executed inside a Session
3. Data is represented as tensors
4. State is maintained with Variables
5. feeds and fetches are used to write data into and read results out of operations

Using graphs

Building the graph

import tensorflow as tf

# Declare constants
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])

# Declare an operation
product = tf.matmul(matrix1, matrix2)

Launching the graph

# Launch the default graph
sess = tf.Session()

# Execute the operation in the graph
result = sess.run(product)
print(result)
# ==> [[ 12.]]

# Close the session
sess.close()

Letting the Session release resources automatically

with tf.Session() as sess:
    result = sess.run([product])
    print(result)

Using TensorFlow in the Python interpreter

In the interpreter you can use the InteractiveSession class with Tensor.eval() and Operation.run(), which avoids having to keep a variable holding the session.

# Enter an interactive TensorFlow Session.
import tensorflow as tf
sess = tf.InteractiveSession()

x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])

# Initialize 'x' using the run() method of its initializer op.
x.initializer.run()

# Add an op to subtract 'a' from 'x'. Run it and print the result
# (tf.sub in older releases; renamed tf.subtract in TensorFlow 1.0).
sub = tf.subtract(x, a)
print(sub.eval())
# ==> [-2. -1.]

# Close the Session when we're done.
sess.close()

Tensors

TensorFlow represents all data with the tensor data structure; only tensors may be passed between operations. A tensor resembles an N-dimensional array and has a data type, a rank (the number of dimensions), and a shape (the size of each dimension).

Rank  Shape        Dimensions  Example
0     []           0-D         scalar: s = 483
1     [D0]         1-D         vector: v = [1.1, 2.2, 3.3]
2     [D0,D1]      2-D         matrix: m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
3     [D0,D1,D2]   3-D         3rd-order tensor: t = [[[2], [4], [6]], [[8], [10], [12]], [[14], [16], [18]]]
n     [D0,…Dn]     n-D         nth-order tensor
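As a quick check (my addition, not from the original post), rank and shape can be inspected directly:

import tensorflow as tf

s = tf.constant(483)                     # rank 0, shape []
v = tf.constant([1.1, 2.2, 3.3])         # rank 1, shape [3]
m = tf.constant([[1, 2, 3], [4, 5, 6]])  # rank 2, shape [2, 3]

with tf.Session() as sess:
    print(sess.run(tf.rank(m)))  # 2
    print(m.get_shape())         # (2, 3)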

Variables

Variables store the state of the graph.

import tensorflow as tf

# Create a variable `state`, initialized to 0
state = tf.Variable(0, name="counter")

# Create a constant `one` with value 1
one = tf.constant(1)

# Addition op
new_value = tf.add(state, one)

# Assign new_value to state
update = tf.assign(state, new_value)

# Initialize all variables
init_op = tf.global_variables_initializer()

# Launch the graph and run the ops
with tf.Session() as sess:
    # Run the init op
    sess.run(init_op)
    # Print the initial value of state
    print(sess.run(state))
    # Run the update op and print state each time
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))

# output:
# 0
# 1
# 2
# 3

No operation executes before sess.run(init_op), so state starts at 0; the value of state is only updated when sess.run(update) runs.

Fetches

Fetches retrieve the results of operations. Fetch as many tensors as you need in a single run call, which is more efficient than separate calls.

import tensorflow as tf

input1 = tf.constant([3.0])
input2 = tf.constant([2.0])
input3 = tf.constant([5.0])

# Addition
intermed = tf.add(input2, input3)
# Multiplication
mul = tf.multiply(input1, intermed)

# (input2 + input3) * input1
with tf.Session() as sess:
    result = sess.run([mul, intermed])
    print(result)

# output:
# [array([ 21.], dtype=float32), array([ 7.], dtype=float32)]

Feeds

Feeds temporarily substitute a tensor value for the duration of a run call; once the call returns, the feed is gone.

import tensorflow as tf

input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)

# Multiplication
output = tf.multiply(input1, input2)

with tf.Session() as sess:
    print(sess.run([output], feed_dict={input1: [7.], input2: [2.]}))

# output:
# [array([ 14.], dtype=float32)]

Getting Started with TensorFlow

Posted on 2017-06-10 | Category: Machine Learning

An example of linear regression fit with gradient descent.

import tensorflow as tf
import numpy as np

# Use NumPy to generate 100 random points with y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# The goal is to recover W and b in y_data = W * x_data + b
# Declare the variables W and b
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

# Minimize the mean squared error
# reduce_mean computes the mean
# square computes the square
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
# Use gradient descent to minimize the loss
train = optimizer.minimize(loss)

# Initialize all variables
init = tf.global_variables_initializer()

# Launch the graph: the execution phase
sess = tf.Session()
sess.run(init)

# Fit the line
for step in range(501):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))
# The best result is W: [0.1], b: [0.3]

# Close the Session when done
sess.close()

Terminal output:

0 [ 0.86394864] [-0.13441049]
20 [ 0.34454051] [ 0.1738376]
40 [ 0.17663138] [ 0.26046464]
60 [ 0.12401388] [ 0.28761086]
80 [ 0.10752521] [ 0.29611763]
100 [ 0.10235816] [ 0.29878339]
120 [ 0.10073899] [ 0.29961875]
140 [ 0.10023157] [ 0.29988053]
160 [ 0.10007257] [ 0.29996258]
180 [ 0.10002275] [ 0.29998827]
200 [ 0.10000715] [ 0.29999632]

A few TensorFlow concepts

Session: the context in which a graph is executed
Graph: the computation task, made up of many nodes
Node: represents an operation, assigned to a CPU or GPU for execution
Variable: mutable state

TensorFlow work has two phases: a construction phase and an execution phase.

Gradient descent

W is first given a random value, then repeatedly adjusted in the direction of the negative gradient so that the loss keeps decreasing. The sketch below spells out the same update by hand.
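Here is that update written out in NumPy (my sketch mirroring the example above, not the optimizer's actual implementation):

import numpy as np

# Same setup: recover W and b in y = W * x + b.
x = np.random.rand(100).astype(np.float32)
y = x * 0.1 + 0.3

W, b, lr = np.random.uniform(-1, 1), 0.0, 0.5
for step in range(200):
    err = W * x + b - y
    # Gradients of loss = mean((W*x + b - y)^2)
    grad_W = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    # Step against the gradient so the loss decreases
    W -= lr * grad_W
    b -= lr * grad_b
print(W, b)  # approaches 0.1, 0.3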
