<?xml version="1.0" encoding="utf-8"?>
<search>
<entry>
<title>es学习系列之二:Aggregation collect mode and execution hint.md</title>
<link href="/2020/08/16/es%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%BA%8C%EF%BC%9AAggregation-collect-mode-and-execution-hint-md.html"/>
<url>/2020/08/16/es%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%BA%8C%EF%BC%9AAggregation-collect-mode-and-execution-hint-md.html</url>
<content type="html"><![CDATA[<blockquote><p>Unless otherwise noted, everything discussed in this post is based on es 7.*</p></blockquote><h2 id="Term-Aggregation的深度优先以及广度优先"><a href="#Term-Aggregation的深度优先以及广度优先" class="headerlink" title="Depth first and breadth first in Term Aggregation"></a>Depth first and breadth first in Term Aggregation</h2><p>Term aggregation is one of the most commonly used aggregation queries. In scenarios with <strong>parent-child aggregations</strong>, understanding when the parent and the child aggregations are executed helps a great deal in optimizing aggregation queries. The collect mode specifies exactly that timing, and there are two modes: depth first and breadth first.</p><h3 id="depth-first"><a href="#depth-first" class="headerlink" title="depth first"></a>depth first</h3><p>In general, depth first suits most scenarios, because in most cases the aggregated field does not consist of a huge number of unique values.</p><ul><li>Aggregation process<ul><li>1. Compute a bucket, then compute its child aggregation results, building an aggregation tree; repeat until all buckets have been computed.</li><li>2. Sort the results of step 1, i.e. sort the aggregation trees.</li><li>3. Prune the results according to the filter conditions, the size parameter, etc.</li></ul></li><li>Suitable scenarios<ul><li>Most terms are duplicates</li><li>The requested aggs size is relatively large</li><li>Reason: the parent aggregation's doc_ids need not be cached; documents are aggregated directly into trees whose nodes have the structure (value, doc_count), and most of the trees will not be pruned</li></ul></li><li>In aggregation computation, multi-level aggregations associate one document with others, forming aggregation trees. Depth first computes all the aggregation trees first and only then performs the subsequent processing.</li></ul><h3 id="breadth-first"><a href="#breadth-first" class="headerlink" title="breadth first"></a>breadth first</h3><ul><li>Aggregation process<ul><li>1. Compute the first-level aggregation results.</li><li>2. Sort the results of step 1.</li><li>3. Trim the first-level nodes according to the filter conditions, the size parameter, etc.</li><li>4. Compute the child aggregations under each remaining node.</li></ul></li><li>Suitable scenarios<ul><li>Most terms are unique</li><li>The requested aggs size is relatively small</li><li>Reason: the parent aggregation's doc_ids are cached; the first-level (root) nodes of the aggregation tree have the structure (value, doc_count, set(doc_id)), so to keep the cache as small as possible, avg(doc_count_per_bucket) * size(buckets) should be as small as possible</li></ul></li><li>For fields whose cardinality is greater than the requested size, or whose cardinality is unknown (e.g. numeric fields or scripts), the default is breadth_first.</li><li>In breadth_first mode, the document sets belonging to the top-level buckets are cached for later replay, which incurs a memory overhead linear in the number of matching documents.</li><li>Even with breadth first, the order parameter can still reference data from a child aggregation; the parent aggregation knows it must invoke that child aggregation before any other child aggregation.<ul><li>The main point here is that if the parent aggregation is sorted by a child aggregation's result, that child aggregation is executed before the parent aggregation.</li></ul></li></ul><h3 id="depth-first-vs-breadth-first"><a href="#depth-first-vs-breadth-first" class="headerlink" title="depth first vs. breadth first"></a>depth first vs. 
breadth first</h3><ul><li>Key points:<ul><li>The number of parent-child bucket combinations + the number of buckets to return (corresponding to computation cost + utilization)<ul><li>High parent-field cardinality, many parent-child combinations, and a small number of buckets to return favor breadth first</li><li>Low parent-field cardinality, few parent-child combinations, and a large number of buckets to return favor depth first</li></ul></li><li>For example:<ul><li>A parent field with 10000 values, about 100000 parent-child combinations, and 5 buckets to return probably favors breadth first</li><li>A parent field with 10 values, about 100 parent-child combinations, and 100 buckets to return probably favors depth first</li></ul></li></ul></li></ul><h2 id="Term-Aggregation的Execution-hint"><a href="#Term-Aggregation的Execution-hint" class="headerlink" title="Execution hint in Term Aggregation"></a>Execution hint in Term Aggregation</h2><h3 id="global-ordinals"><a href="#global-ordinals" class="headerlink" title="global_ordinals"></a>global_ordinals</h3><p>When doc values are stored, each original value is assigned an ordinal, which reduces disk usage and also reduces memory usage during aggregation.</p><ul><li>Aggregation process<ul><li>Segment-level aggregation:<ul><li>Within the aggregated field's doc values, perform the (doc_id, ordinal) -> (ordinal, set(doc_id)) aggregation</li></ul></li><li>Shard-level aggregation:<ul><li>Build a global ordinal map over the shard's segments, with structure (segment_id, ordinal, global ordinal); ordinals that correspond to the same original value share the same global ordinal.</li><li>Using the global ordinal map, perform the (ordinal, set(doc_id)) -> (global ordinal, set(doc_id)) conversion.</li><li>Bucket by global ordinal, sort by doc count, and select the top n buckets, performing the (global ordinal, set(doc_id)) -> (global ordinal, doc count) aggregation.</li><li>Using the global ordinal map and the segments' doc values, replace global ordinals with original values: (global ordinal, doc count) -> (segment_id, ordinal, doc count) -> (original value, doc count).</li></ul></li><li>Index-level aggregation:<ul><li>The coordinating node aggregates the global top n results.</li></ul></li></ul></li><li>Validity of global ordinals<ul><li>Because global ordinals provide a unified map across all segments of a shard, they must be rebuilt from scratch whenever a new segment becomes visible (typically on refresh). Global ordinals are therefore better suited to <strong>historical data</strong>.</li></ul></li></ul><h3 id="map"><a href="#map" class="headerlink" title="map"></a>map</h3><p>map is comparatively simpler; the main difference is that shard-level aggregation no longer builds a global ordinal map, but works with original values directly.</p><ul><li>Aggregation process<ul><li>Segment-level aggregation:<ul><li>Within the aggregated field's doc values, perform the (doc_id, ordinal) -> (ordinal, set(doc_id)) aggregation</li><li>Replace ordinals with original values, performing the (ordinal, set(doc_id)) -> (original, set(doc_id)) conversion</li></ul></li><li>Shard-level aggregation:<ul><li>Bucket by original value, sort by doc count, and select the top n buckets, performing the (original, set(doc_id)) -> (original, doc count) aggregation.</li></ul></li><li>Index-level aggregation:<ul><li>The coordinating node aggregates the global top n results.</li></ul></li></ul></li></ul><h3 id="global-ordinals-vs-map"><a href="#global-ordinals-vs-map" class="headerlink" title="global_ordinals vs. map"></a>global_ordinals vs. map</h3><ul><li>Scenarios favoring global_ordinals<ul><li>The aggregated field's cardinality is not large</li><li>The refresh interval is relatively long</li><li>Indices no longer being written to, e.g. historical indices</li><li>With eager_global_ordinals enabled, the global ordinal map is built at indexing time, which may slow down indexing</li></ul></li><li>Scenarios favoring map<ul><li>The aggregated field's cardinality is very large</li><li>Note that it may consume considerably more memory</li></ul></li></ul><a id="more"></a><h2 id="实战演练"><a href="#实战演练" class="headerlink" title="Hands-on practice"></a>Hands-on practice</h2><h3 id="冷数据的聚合"><a href="#冷数据的聚合" class="headerlink" title="Aggregating cold data"></a>Aggregating cold data</h3><p>For this cold-data aggregation, with close to 80M documents, global_ordinals aggregates faster than map.</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span 
class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span 
class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br></pre></td><td class="code"><pre><span class="line"># 2020-08-16 18:00:00 ~ 2020-08-16 21:00:00</span><br><span class="line"># 接近 8kw 条数据的聚合</span><br><span 
class="line">GET /cold_data-2020.08.16-000120/_count</span><br><span class="line">{</span><br><span class="line"> "query": {</span><br><span class="line"> "bool": {</span><br><span class="line"> "must": [</span><br><span class="line"> {</span><br><span class="line"> "range": {</span><br><span class="line"> "startTimeMillis": {</span><br><span class="line"> "from": 1597572000000,</span><br><span class="line"> "include_lower": true,</span><br><span class="line"> "include_upper": true,</span><br><span class="line"> "to": 1597582800000</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> {</span><br><span class="line"> "match": {</span><br><span class="line"> "process.serviceName": {</span><br><span class="line"> "query": "nginx"</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> ]</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line">返回:</span><br><span class="line">{</span><br><span class="line"> "count" : 79595834,</span><br><span class="line"> "_shards" : {</span><br><span class="line"> "total" : 6,</span><br><span class="line"> "successful" : 6,</span><br><span class="line"> "skipped" : 0,</span><br><span class="line"> "failed" : 0</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 2020-08-16 18:00:00 ~ 2020-08-16 21:00:00</span><br><span class="line"># global ordinals 模式</span><br><span class="line"># 耗时: </span><br><span class="line"># 17207ms</span><br><span class="line"># 17089ms</span><br><span class="line"># 16624ms</span><br><span class="line"># 平均耗时: 16973ms</span><br><span class="line">GET /cold_data-2020.08.16-000120/_search?request_cache=false</span><br><span class="line">{</span><br><span class="line"> "aggregations": {</span><br><span class="line"> "requestIDs": 
{</span><br><span class="line"> "aggregations": {</span><br><span class="line"> "startTimeMillis": {</span><br><span class="line"> "max": {</span><br><span class="line"> "field": "startTimeMillis"</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "terms": {</span><br><span class="line"> "field": "requestID",</span><br><span class="line"> "execution_hint": "global_ordinals", </span><br><span class="line"> "order": [</span><br><span class="line"> {</span><br><span class="line"> "startTimeMillis": "desc"</span><br><span class="line"> }</span><br><span class="line"> ],</span><br><span class="line"> "size": 10</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "query": {</span><br><span class="line"> "bool": {</span><br><span class="line"> "must": [</span><br><span class="line"> {</span><br><span class="line"> "range": {</span><br><span class="line"> "startTimeMillis": {</span><br><span class="line"> "from": 1597572000000,</span><br><span class="line"> "include_lower": true,</span><br><span class="line"> "include_upper": true,</span><br><span class="line"> "to": 1597582800000</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> {</span><br><span class="line"> "match": {</span><br><span class="line"> "process.serviceName": {</span><br><span class="line"> "query": "nginx"</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> ]</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "size": 0</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 2020-08-16 18:00:00 ~ 2020-08-16 21:00:00</span><br><span class="line"># map 模式</span><br><span class="line"># 耗时: </span><br><span class="line"># 
26452ms</span><br><span class="line"># 26405ms</span><br><span class="line"># 26747ms</span><br><span class="line"># 平均耗时: 26534ms</span><br><span class="line">GET /cold_data-2020.08.16-000120/_search?request_cache=false</span><br><span class="line">{</span><br><span class="line"> "aggregations": {</span><br><span class="line"> "requestIDs": {</span><br><span class="line"> "aggregations": {</span><br><span class="line"> "startTimeMillis": {</span><br><span class="line"> "max": {</span><br><span class="line"> "field": "startTimeMillis"</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "terms": {</span><br><span class="line"> "field": "requestID",</span><br><span class="line"> "execution_hint": "map", </span><br><span class="line"> "order": [</span><br><span class="line"> {</span><br><span class="line"> "startTimeMillis": "desc"</span><br><span class="line"> }</span><br><span class="line"> ],</span><br><span class="line"> "size": 10</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "query": {</span><br><span class="line"> "bool": {</span><br><span class="line"> "must": [</span><br><span class="line"> {</span><br><span class="line"> "range": {</span><br><span class="line"> "startTimeMillis": {</span><br><span class="line"> "from": 1597572000000,</span><br><span class="line"> "include_lower": true,</span><br><span class="line"> "include_upper": true,</span><br><span class="line"> "to": 1597582800000</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> {</span><br><span class="line"> "match": {</span><br><span class="line"> "process.serviceName": {</span><br><span class="line"> "query": "nginx"</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> ]</span><br><span 
class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "size": 0</span><br><span class="line">}</span><br></pre></td></tr></table></figure><h3 id="热数据"><a href="#热数据" class="headerlink" title="Hot data"></a>Hot data</h3><p>For this hot-data aggregation, with close to 30M documents, map aggregates faster than global_ordinals.</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span 
class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br><span class="line">113</span><br><span 
class="line">114</span><br><span class="line">115</span><br><span class="line">116</span><br><span class="line">117</span><br><span class="line">118</span><br><span class="line">119</span><br><span class="line">120</span><br><span class="line">121</span><br><span class="line">122</span><br><span class="line">123</span><br><span class="line">124</span><br><span class="line">125</span><br><span class="line">126</span><br><span class="line">127</span><br><span class="line">128</span><br><span class="line">129</span><br><span class="line">130</span><br><span class="line">131</span><br><span class="line">132</span><br><span class="line">133</span><br><span class="line">134</span><br><span class="line">135</span><br><span class="line">136</span><br><span class="line">137</span><br><span class="line">138</span><br><span class="line">139</span><br><span class="line">140</span><br><span class="line">141</span><br><span class="line">142</span><br><span class="line">143</span><br><span class="line">144</span><br><span class="line">145</span><br><span class="line">146</span><br><span class="line">147</span><br><span class="line">148</span><br><span class="line">149</span><br><span class="line">150</span><br><span class="line">151</span><br></pre></td><td class="code"><pre><span class="line"># 2020-08-16 20:30:00 ~ 2020-08-16 21:00:00</span><br><span class="line"># 接近 3kw 条数据的聚合</span><br><span class="line">GET /hot_data-2020.08.16-000121/_count</span><br><span class="line">{</span><br><span class="line"> "query": {</span><br><span class="line"> "bool": {</span><br><span class="line"> "must": [</span><br><span class="line"> {</span><br><span class="line"> "range": {</span><br><span class="line"> "startTimeMillis": {</span><br><span class="line"> "from": 1597581000000,</span><br><span class="line"> "include_lower": true,</span><br><span class="line"> "include_upper": true,</span><br><span class="line"> "to": 1597582800000</span><br><span class="line"> }</span><br><span 
class="line"> }</span><br><span class="line"> },</span><br><span class="line"> {</span><br><span class="line"> "match": {</span><br><span class="line"> "process.serviceName": {</span><br><span class="line"> "query": "nginx"</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> ]</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line">返回: </span><br><span class="line">{</span><br><span class="line"> "count" : 31648173,</span><br><span class="line"> "_shards" : {</span><br><span class="line"> "total" : 6,</span><br><span class="line"> "successful" : 6,</span><br><span class="line"> "skipped" : 0,</span><br><span class="line"> "failed" : 0</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"># 2020-08-16 20:30:00 ~ 2020-08-16 21:00:00</span><br><span class="line"># global ordinals 模式</span><br><span class="line"># 耗时: </span><br><span class="line"># 15256ms</span><br><span class="line"># 16699ms</span><br><span class="line"># 14630ms</span><br><span class="line"># 平均耗时: 15528ms</span><br><span class="line">GET /hot_data-2020.08.16-000121/_search?request_cache=false</span><br><span class="line">{</span><br><span class="line"> "aggregations": {</span><br><span class="line"> "requestIDs": {</span><br><span class="line"> "aggregations": {</span><br><span class="line"> "startTimeMillis": {</span><br><span class="line"> "max": {</span><br><span class="line"> "field": "startTimeMillis"</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "terms": {</span><br><span class="line"> "field": "requestID",</span><br><span class="line"> "execution_hint": "global_ordinals", </span><br><span class="line"> "order": [</span><br><span class="line"> {</span><br><span class="line"> 
"startTimeMillis": "desc"</span><br><span class="line"> }</span><br><span class="line"> ],</span><br><span class="line"> "size": 10</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "query": {</span><br><span class="line"> "bool": {</span><br><span class="line"> "must": [</span><br><span class="line"> {</span><br><span class="line"> "range": {</span><br><span class="line"> "startTimeMillis": {</span><br><span class="line"> "from": 1597581000000,</span><br><span class="line"> "include_lower": true,</span><br><span class="line"> "include_upper": true,</span><br><span class="line"> "to": 1597582800000</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> {</span><br><span class="line"> "match": {</span><br><span class="line"> "process.serviceName": {</span><br><span class="line"> "query": "nginx"</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> ]</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "size": 0</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 2020-08-16 20:30:00 ~ 2020-08-16 21:00:00</span><br><span class="line"># map 模式</span><br><span class="line"># 耗时: </span><br><span class="line"># 12908ms</span><br><span class="line"># 11807ms</span><br><span class="line"># 10977ms</span><br><span class="line"># 平均耗时: 11897ms</span><br><span class="line">GET /hot_data-2020.08.16-000121/_search?request_cache=false</span><br><span class="line">{</span><br><span class="line"> "aggregations": {</span><br><span class="line"> "requestIDs": {</span><br><span class="line"> "aggregations": {</span><br><span class="line"> "startTimeMillis": {</span><br><span class="line"> "max": {</span><br><span class="line"> "field": "startTimeMillis"</span><br><span class="line"> 
}</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "terms": {</span><br><span class="line"> "field": "requestID",</span><br><span class="line"> "execution_hint": "map", </span><br><span class="line"> "order": [</span><br><span class="line"> {</span><br><span class="line"> "startTimeMillis": "desc"</span><br><span class="line"> }</span><br><span class="line"> ],</span><br><span class="line"> "size": 10</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "query": {</span><br><span class="line"> "bool": {</span><br><span class="line"> "must": [</span><br><span class="line"> {</span><br><span class="line"> "range": {</span><br><span class="line"> "startTimeMillis": {</span><br><span class="line"> "from": 1597581000000,</span><br><span class="line"> "include_lower": true,</span><br><span class="line"> "include_upper": true,</span><br><span class="line"> "to": 1597582800000</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> {</span><br><span class="line"> "match": {</span><br><span class="line"> "process.serviceName": {</span><br><span class="line"> "query": "nginx"</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> ]</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "size": 0</span><br><span class="line">}</span><br></pre></td></tr></table></figure><h3 id="小结"><a href="#小结" class="headerlink" title="Takeaways"></a>Takeaways</h3><blockquote><p>Note that this is not an ordinary parent-child aggregation query: the child aggregation here is the parent aggregation's sort field, so collect mode was not discussed.</p></blockquote><p>On cold data, when aggregating a high-cardinality field, global_ordinals is almost always faster than map. On hot data, however, map is not guaranteed to beat global_ordinals, because map can consume a large amount of memory; in the cold-data test above, with close to 80M documents, map was in fact slower than global_ordinals.</p><p>Neither global_ordinals nor map is absolutely better; the outcome is tightly coupled to the actual data (cardinality, field type), so the parameter should be tuned case by case.</p><h2 id="总结"><a href="#总结" 
class="headerlink" title="Conclusion"></a>Conclusion</h2><p>This post first discussed how the collect mode parameter affects term aggregation under parent-child aggregation queries, and analyzed the difference between depth first and breadth first. It then discussed how the execution hint affects aggregations on high-cardinality fields, and summarized the scenarios each mode suits.</p><h2 id="参考"><a href="#参考" class="headerlink" title="References"></a>References</h2><p><a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.2/doc-values.html" target="_blank" rel="noopener">doc-values</a><br><a href="https://www.elastic.co/guide/en/elasticsearch/guide/current/_deep_dive_on_doc_values.html" target="_blank" rel="noopener">Deep Dive on Doc Values</a><br><a href="https://my.oschina.net/bingzhong/blog/1917915" target="_blank" rel="noopener">Elasticsearch聚合——Bucket Aggregations</a><br><a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.2/shard-request-cache.html" target="_blank" rel="noopener">Shard request cache</a><br><a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.2/eager-global-ordinals.html#_what_are_global_ordinals" target="_blank" rel="noopener">eager_global_ordinals</a><br><a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.2/tune-for-search-speed.html#tune-for-search-speed" target="_blank" rel="noopener">Tune for search speed</a><br><a href="https://elasticsearch.cn/question/1797" target="_blank" rel="noopener">Elasticsearch聚合操作的时间复杂度是O(n)吗?</a><br><a href="https://blog.csdn.net/zwgdft/article/details/83215977" target="_blank" rel="noopener">聚合查询越来越慢?——详解Elasticsearch的Global Ordinals与High Cardinality</a><br><a href="https://www.elastic.co/cn/blog/improving-the-performance-of-high-cardinality-terms-aggregations-in-elasticsearch" target="_blank" rel="noopener">Improving the performance of high-cardinality terms aggregations</a></p><blockquote><p>This post is a set of notes written while learning; given my limited expertise, some views or descriptions may be wrong. Corrections from fellow readers are very welcome.</p></blockquote>]]></content>
<tags>
<tag> elasticsearch </tag>
<tag> aggregation </tag>
</tags>
</entry>
<entry>
<title>python2中format和%拼接字符串的异同</title>
<link href="/2020/08/08/python2%E4%B8%ADformat%E5%92%8C%E7%99%BE%E5%88%86%E5%8F%B7%E6%8B%BC%E6%8E%A5%E5%AD%97%E7%AC%A6%E4%B8%B2%E7%9A%84%E5%BC%82%E5%90%8C.html"/>
<url>/2020/08/08/python2%E4%B8%ADformat%E5%92%8C%E7%99%BE%E5%88%86%E5%8F%B7%E6%8B%BC%E6%8E%A5%E5%AD%97%E7%AC%A6%E4%B8%B2%E7%9A%84%E5%BC%82%E5%90%8C.html</url>
<content type="html"><![CDATA[<h3 id="基础知识"><a href="#基础知识" class="headerlink" title="基础知识"></a>基础知识</h3><p>相信python2的编码问题大多数开发同学都遇到过,在出现非 ascii 编码字符时,就很容易出现编码异常的问题。python2的字符编码分为 str 以及 unicode,具体情况这里不再赘述,只会总结字符串拼接时应该注意的问题以及可能遇到的坑点。</p><p>以下几点常识是下面进一步讨论问题的基础:</p><ul><li>str转为unicode的过程,称为解码,即 decode。</li><li>unicode转为str,称为编码,即 encode。</li><li>使用<code>%</code>把str和unicode拼接,会自动隐式地把str转为unicode后,再进行拼接。(如果是format拼接呢?这里留个悬念,答案稍后揭晓)</li><li>当导入<code>__future__</code>包的unicode_literals特性时,python定义的字符都是unicode,而不是默认的str。这个也是为了让python2能够导入python3的特性,因为在python3中的str都是unicode。</li></ul><h3 id="拼接字符串"><a href="#拼接字符串" class="headerlink" title="% 拼接字符串"></a><code>%</code> 拼接字符串</h3><p>我们首先看看python2中使用<code>%</code>拼接字符串的情况。从第一组结果来看,我们可以看到只要格式化串和字符串参数其中一个为unicode,最终结果就为unicode,这个和上面讲的第三点一致。在对str和unicode拼接的时候,会自动把str转为unicode,如第二组的中间两个结果。</p><p>但是我们需要注意编码的问题,如第2个结果,由于”中文”是非 ascii 编码,而且python解释器不知道其类型,会用ascii编码对其进行解码,相当于 <code>u"%s" % ("中文").decode("ascii")</code>,而ascii不认识非 0~127 的编码所以就报错,当然我们可以手动指定用”utf-8”进行解码。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">>>> </span>type(<span class="string">"%s"</span> % (<span class="string">"hello"</span>))</span><br><span class="line"><type <span class="string">'str'</span>></span><br><span
class="line"><span class="meta">>>> </span>type(<span class="string">u"%s"</span> % (<span class="string">"hello"</span>))</span><br><span class="line"><type <span class="string">'unicode'</span>></span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">"%s"</span> % (<span class="string">u"hello"</span>))</span><br><span class="line"><type <span class="string">'unicode'</span>></span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">u"%s"</span> % (<span class="string">u"hello"</span>))</span><br><span class="line"><type <span class="string">'unicode'</span>></span><br><span class="line">>>></span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">"%s"</span> % (<span class="string">"中文"</span>))</span><br><span class="line"><type <span class="string">'str'</span>></span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">u"%s"</span> % (<span class="string">"中文"</span>)) <span class="comment"># 最终结果为unicode,会隐式地通过ascii编码把"中文"解码为unicode</span></span><br><span class="line">Traceback (most recent call last):</span><br><span class="line"> File <span class="string">"<stdin>"</span>, line <span class="number">1</span>, <span class="keyword">in</span> <module></span><br><span class="line">UnicodeDecodeError: <span class="string">'ascii'</span> codec can<span class="string">'t decode byte 0xe4 in position 0: ordinal not in range(128)</span></span><br><span class="line"><span class="string">>>> type(u"%s" % ("中文".decode("utf-8")))</span></span><br><span class="line"><span class="string"><type '</span>unicode<span class="string">'></span></span><br><span class="line"><span class="string">>>> type("%s" % (u"中文"))</span></span><br><span class="line"><span class="string"><type '</span>unicode<span class="string">'></span></span><br><span class="line"><span class="string">>>> type(u"%s" % (u"中文"))</span></span><br><span class="line"><span class="string"><type 
'</span>unicode<span class="string">'></span></span><br></pre></td></tr></table></figure><h3 id="format-拼接字符串"><a href="#format-拼接字符串" class="headerlink" title="format 拼接字符串"></a><code>format</code> 拼接字符串</h3><p>同样的,我们先看下面的第一组结果,是不是有点吃惊?第一组的第二个结果不是unicode类型,而是str类型,这个跟<code>%</code>是不同的。很显然从结果上看,我们知道对于<code>format</code>,其拼接结果类型取决于格式化串的类型,而与参数没有任何关系。</p><p>理解了第一组数据的规律后,再看第二组就知道为什么有的情况会报异常了。第二组的第二个结果,由于最终结果为str,python解释器会默认用 ascii 对 <code>u"中文"</code> 进行编码;而第三个结果,由于最终结果为unicode,python解释器会默认用 ascii 对 “中文” 进行解码,而报错的理由和前面的情况一致,都是因为ascii不认识非 0~127 的编码。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">>>> </span>type(<span class="string">"{}"</span>.format(<span class="string">"hello"</span>))</span><br><span class="line"><type <span class="string">'str'</span>></span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">"{}"</span>.format(<span class="string">u"hello"</span>))</span><br><span class="line"><type <span class="string">'str'</span>></span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">u"{}"</span>.format(<span
class="string">"hello"</span>))</span><br><span class="line"><type <span class="string">'unicode'</span>></span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">u"{}"</span>.format(<span class="string">u"hello"</span>))</span><br><span class="line"><type <span class="string">'unicode'</span>></span><br><span class="line">>>></span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">"{}"</span>.format(<span class="string">"中文"</span>))</span><br><span class="line"><type <span class="string">'str'</span>></span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">"{}"</span>.format(<span class="string">u"中文"</span>)) <span class="comment"># 最终结果为str,会隐式地通过ascii编码把u"中文"编码为ascii</span></span><br><span class="line">Traceback (most recent call last):</span><br><span class="line"> File <span class="string">"<stdin>"</span>, line <span class="number">1</span>, <span class="keyword">in</span> <module></span><br><span class="line">UnicodeEncodeError: <span class="string">'ascii'</span> codec can<span class="string">'t encode characters in position 0-1: ordinal not in range(128)</span></span><br><span class="line"><span class="string">>>> type("{}".format(u"中文".encode("utf-8")))</span></span><br><span class="line"><span class="string"><type '</span>st<span class="string">r'></span></span><br><span class="line"><span class="string">>>> type(u"{}".format("中文")) # 最终结果为unicode,会隐式地通过ascii编码把"中文"解码为unicdoe</span></span><br><span class="line"><span class="string">Traceback (most recent call last):</span></span><br><span class="line"><span class="string"> File "<stdin>", line 1, in <module></span></span><br><span class="line"><span class="string">UnicodeDecodeError: '</span>ascii<span class="string">' codec can'</span>t decode byte <span class="number">0xe4</span> <span class="keyword">in</span> position <span class="number">0</span>: ordinal <span class="keyword">not</span> <span 
class="keyword">in</span> range(<span class="number">128</span>)</span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">u"{}"</span>.format(<span class="string">"中文"</span>.decode(<span class="string">"utf-8"</span>)))</span><br><span class="line"><type <span class="string">'unicode'</span>></span><br><span class="line"><span class="meta">>>> </span>type(<span class="string">u"{}"</span>.format(<span class="string">u"中文"</span>))</span><br><span class="line"><type <span class="string">'unicode'</span>></span><br></pre></td></tr></table></figure><h3 id="坑点"><a href="#坑点" class="headerlink" title="坑点"></a>坑点</h3><p>而我们线上的问题,要比以上两种都要隐蔽,大致如下:</p><h4 id="代码目录结构"><a href="#代码目录结构" class="headerlink" title="代码目录结构"></a>代码目录结构</h4><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"> tree -L 2 -I "*.pyc"</span><br><span class="line">.</span><br><span class="line">├── test_module_002</span><br><span class="line">│ ├── __init__.py</span><br><span class="line">│ ├── __pycache__</span><br><span class="line">│ ├── main.py</span><br><span class="line">│ └── module_a.py</span><br></pre></td></tr></table></figure><a id="more"></a><h4 id="代码"><a href="#代码" class="headerlink" title="代码"></a>代码</h4><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span 
class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># module_a.py</span></span><br><span class="line"><span class="keyword">from</span> __future__ <span class="keyword">import</span> unicode_literals</span><br><span class="line"></span><br><span class="line">variable_a = <span class="string">"mytag"</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># main.py</span></span><br><span class="line"><span class="comment"># -*- encoding=utf8 -*-</span></span><br><span class="line"><span class="keyword">import</span> six</span><br><span class="line"></span><br><span class="line"><span class="keyword">from</span> test_module_002 <span class="keyword">import</span> module_a</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> six.PY2:</span><br><span class="line"> variable_b = <span class="string">u"中文"</span>.encode(<span class="string">"utf-8"</span>)</span><br><span class="line"><span class="keyword">else</span>:</span><br><span class="line"> variable_b = <span class="string">u"中文"</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> __name__ == <span class="string">"__main__"</span>:</span><br><span class="line"> print(<span class="string">"%s %s"</span> % (module_a.variable_a, variable_b))</span><br><span class="line"> print(<span class="string">"{} {}"</span>.format(module_a.variable_a, variable_b))</span><br><span class="line"> print(<span class="string">"%s %s"</span> % (module_a.variable_a.encode(<span class="string">"utf-8"</span>), variable_b))</span><br></pre></td></tr></table></figure><h4 id="分析"><a href="#分析" class="headerlink" title="分析"></a>分析</h4><p>我们一开始使用的是 <code>%</code> 
方式进行拼接字符串,会报类似以下的错误。当时就意识到可能是中文编码问题,于是就尝试使用format,没想到居然成功解决了,但是当时不知道具体原因是什么。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">print "%s %s" % (module_a.variable_a, variable_b)</span><br><span class="line">UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 0: ordinal not in range(128)</span><br></pre></td></tr></table></figure><p>但是经过前面的分析,现在回过头来看还是比较清晰的,由于format最终类型为str,所以第二个参数variable_b不需要解码,只需要把variable_a进行编码就行了。哎?variable_a为什么是unicode,它前面没有u,不应该是str吗?这个也是一大坑点,在引用开源代码时需要注意其模块是否引入了 unicode_literals 特性,如果引入了那么定义的字符就默认为unicode了。</p><p>可能有人会觉得,那我把variable_a用 utf-8 进行编码不就行了,这样不就只是两个str进行拼接吗?是的,这样在 <code>python2</code> 中的确可以,但是需要注意的是这样在 <code>python3</code> 中会加前缀b以及单引号,这样对一些匹配场景会有影响,如下结果所示。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">b'mytag' 中文</span><br></pre></td></tr></table></figure><p>所以,如果要兼容python2和python3的话,最佳解决办法还是使用format。python2后两种均能正常显示,python3前两种均能正常显示。当然,也可以在print之前通过 six 进行判断,如:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">if</span> six.PY2:</span><br><span class="line"> print(<span class="string">"%s %s"</span> % (module_a.variable_a.encode(<span class="string">"utf-8"</span>), variable_b))</span><br><span class="line"><span class="keyword">else</span>:</span><br><span class="line"> print(<span class="string">"%s %s"</span> % (module_a.variable_a, variable_b))</span><br></pre></td></tr></table></figure><blockquote><p>在编写代码时,从外面导入到系统的参数应该尽快转为unicode类型,在输出到外部时,应该转为str类型,这个也被称为 unicode sandwich 模型。</p></blockquote><h3 id="总结"><a href="#总结" class="headerlink"
title="总结"></a>总结</h3><p>经过以上论述,我们知道了<code>%</code>和<code>format</code>的一些特性以及两者的异同点,分析了一个常见的坑点,并总结了在兼容python2和python3的场景下的方案。以下是本文要点:</p><ul><li>使用 <code>%</code> 进行拼接时,格式化串和字符串参数其中一个为unicode,最终结果就为unicode。</li><li>使用 <code>format</code> 进行拼接时,拼接结果类型取决于其格式化串的类型。</li><li>如果引入了unicode_literals特性,那么该模块定义的字符串均为unicode类型。</li></ul><h3 id="参考"><a href="#参考" class="headerlink" title="参考"></a>参考</h3><p><a href="https://stackoverflow.com/questions/21129020/how-to-fix-unicodedecodeerror-ascii-codec-cant-decode-byte" target="_blank" rel="noopener">How to fix: “UnicodeDecodeError: ‘ascii’ codec can’t decode byte”</a><br><a href="https://pyformat.info/#conversion_flags" target="_blank" rel="noopener">Using % and .format() for great good!</a><br><a href="https://nedbatchelder.com/text/unipain.html" target="_blank" rel="noopener">Pragmatic Unicode</a> ===> 重点推荐这个视频<br><a href="https://stackoverflow.com/questions/9644099/python-ascii-codec-cant-decode-byte" target="_blank" rel="noopener">Python - ‘ascii’ codec can’t decode byte</a><br><a href="https://anonbadger.wordpress.com/2016/01/05/python2-string-format-and-unicode/" target="_blank" rel="noopener">Python2, string .format(), and unicode</a> ===> 重点推荐这个文章 </p>]]></content>
<tags>
<tag> python2 </tag>
<tag> python3 </tag>
<tag> 编码 </tag>
</tags>
</entry>
<entry>
<title>flink学习系列之一: taskmanager, slot与parallelism</title>
<link href="/2020/07/12/flink%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%B8%80:%20taskmanager,%20slot%E4%B8%8Eparallelism.html"/>
<url>/2020/07/12/flink%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%B8%80:%20taskmanager,%20slot%E4%B8%8Eparallelism.html</url>
<content type="html"><![CDATA[<blockquote><p>如无特别说明,本文讨论的内容均基于 flink 1.7.1</p></blockquote><blockquote><p>最近一段时间用 flink 写一些 etl 作业,做数据的收集清洗入库,也遇到一些性能问题需要进一步解决,于是计划学习部分flink底层知识。第一篇,跟以前学习spark一样,从flink的并行度说起。</p></blockquote><h2 id="flink作业的启动模式"><a href="#flink作业的启动模式" class="headerlink" title="flink作业的启动模式"></a>flink作业的启动模式</h2><p>通过 <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.7/ops/deployment/yarn_setup.html" target="_blank" rel="noopener">flink YARN Setup</a> 文档我们能够了解到,flink的启动方式大致有两种,<br>一种是先分配jobmanager、taskmanager的资源,等待后续提交作业,另一种是在提交的时候申请资源并运行。下面将简单介绍一下这两种启动方式的区别,并着重关注其并行度的计算,最后和spark并行度的计算作对比。</p><h3 id="部署方式一:在yarn中启动一个flink-session,提交job到该session"><a href="#部署方式一:在yarn中启动一个flink-session,提交job到该session" class="headerlink" title="部署方式一:在yarn中启动一个flink session,提交job到该session"></a>部署方式一:在yarn中启动一个flink session,提交job到该session</h3><ul><li>启动flink session<ul><li>./bin/yarn-session.sh -tm 8192 -s 32</li><li>关键配置:<ul><li>-n,指定 container 数量(即taskmanager的数量,不过已经不建议使用),对应的<a href="https://github.com/apache/flink/blob/release-1.7.1/flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java#L373" target="_blank" rel="noopener">源码</a> </li><li>-tm,分配 taskmanager 内存大小</li><li>-jm,分配 jobmanager 内存大小</li><li>-s,每个taskmanager分配slot个数(如果配置了将会覆盖yarn的 parallelism.default 配置,parallelism.default 值默认为1)</li><li>-Dyarn.containers.vcores,在yarn中分配的vcore个数,默认和slot个数一致,即一个slot一个vcore</li><li>默认 taskmanager 的数量为1,并行度为 slot * taskmanager ,<a href="https://github.com/apache/flink/blob/release-1.7.1/flink-yarn/src/main/java/org/apache/flink/yarn/cli/FlinkYarnSessionCli.java#L619" target="_blank" rel="noopener">源码</a></li></ul></li><li>一旦 flink session在yarn中启动成功,将会展示有关 jobmanager 连接的详细信息,通过CTRL+C 或者 在client中输入stop关闭 flink session</li></ul></li><li>提交job到该session<ul><li>./bin/flink run ./examples/batch/WordCount.jar </li><li>关键配置:<ul><li>-c,指定入口class</li><li>-m,指定jobmanager地址</li><li>-p,指定作业的并行度</li></ul></li><li>client能够自动识别对应的 jobmanager 
地址</li><li>并行度的确定:<ul><li>如果不指定 -p ,则作业并行度为 1 (parallelism.default 的配置值,默认为1)</li><li>如果指定 -p,则作业在该session下,以 -p 指定值的并行度运行。如果作业的并行度大于session的并行度,则会报异常,作业启动失败。</li></ul></li></ul></li></ul><h3 id="部署方式二:在yarn中启动一个单独的作业"><a href="#部署方式二:在yarn中启动一个单独的作业" class="headerlink" title="部署方式二:在yarn中启动一个单独的作业"></a>部署方式二:在yarn中启动一个单独的作业</h3><ul><li>./bin/flink run -m yarn-cluster ./examples/batch/WordCount.jar</li><li>flink session的配置同样适用于启动单独的作业,需要加前缀 y 或者 yarn</li><li>关键配置:<ul><li>-n ,允许加载savepoint失败时启动程序</li><li>-d,client非阻塞模式启动作业</li><li>-p,指定作业并行度</li><li>-ytm,分配 taskmanager 内存大小</li><li>-yjm,分配 jobmanager 内存大小</li><li>-ys,指定每个taskmanager分配slot个数</li><li>-yn,指定container数量,和taskmanager数量一致</li></ul></li><li>并行度的确定<ul><li>如果指定了-m yarn-cluster,并且是 -d 或者 -yd 模式,不通过 -yid 指定 applicationid,则其并行度由 -p 决定。</li><li>flink会启动多少个taskmanager?我们知道flink作业的实际并行度是由 taskmanager * slot 决定的,默认情况下每个taskmanager的slot数量为1,所以yarn最终为了实现并行度为 -p 的作业,需要启动p个taskmanager。num(taskmanager) = p / slot </li></ul></li></ul><h2 id="spark-on-yarn-vs-flink-on-yarn"><a href="#spark-on-yarn-vs-flink-on-yarn" class="headerlink" title="spark on yarn vs. flink on yarn"></a>spark on yarn vs. flink on yarn</h2><blockquote><p>spark相关的executor以及并行的计算见 Spark学习系列之一和之二</p></blockquote><ul><li>executor vs. taskmanager<ul><li>spark submit 通过 --num-executors 控制executor数量</li><li>flink run 通过 -p 和 -ys 控制taskmanager数量</li></ul></li></ul><blockquote><p>另外spark on standalone模式下,其executor数量的计算方式和flink run差不多,它也是通过总的核数和每个executor核数反算所需的executor数目,可以把 total-executor-cores 类比 -p,executor-cores 类比 -ys。</p></blockquote><a id="more"></a><ul><li>executor core vs. 
slot<ul><li>spark submit 通过 --executor-cores 控制每个executor的core数量,在默认yarn资源调度器(DefaultResourceCalculator)的情况下,并不能保证每个executor实际分配到的core为指定值,但是每个executor会依然认为自己有指定个core,类似于cpu的超卖。</li><li>flink run 中,一个作业的slot总数即为其最大的并行度,而每个slot可以通过 yarn.containers.vcores 配置实际分配到的vcore数量。</li></ul></li></ul><h2 id="总结"><a href="#总结" class="headerlink" title="总结"></a>总结</h2><p>可以看出 flink 的并行度要比 spark 灵活,它可以通过taskmanager, slot, 算子设置并行度决定实际运行的并行度。不过这样会导致flink上手难度可能会更高,而一个taskmanager的内存会被slot平均分配,<br>也进一步给作业带来不稳定性。</p><p>参考:<br><a href="https://zhuanlan.zhihu.com/p/92721430" target="_blank" rel="noopener">flink的slot 和parallelism</a><br><a href="https://ci.apache.org/projects/flink/flink-docs-release-1.7/ops/deployment/yarn_setup.html" target="_blank" rel="noopener">flink YARN Setup</a><br><a href="https://ci.apache.org/projects/flink/flink-docs-release-1.7/ops/config.html" target="_blank" rel="noopener">flink Configuration</a><br><a href="https://juejin.im/post/5bf8dd7a51882507e94b8b15" target="_blank" rel="noopener">Flink 集群运行原理兼部署及Yarn运行模式深入剖析-Flink牛刀小试</a><br><a href="https://github.com/apache/flink/blob/release-1.7.1/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java" target="_blank" rel="noopener">flink 单独运行作业源码</a> </p><blockquote><p>本文为学习过程中产生的总结,由于学艺不精可能有些观点或者描述有误,还望各位同学帮忙指正,共同进步。</p></blockquote>]]></content>
<tags>
<tag> spark </tag>
<tag> flink </tag>
</tags>
</entry>
<entry>
<title>es学习系列之一:Rollover Index VS. Index Lifecycle Management</title>
<link href="/2020/05/17/es%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%B8%80%EF%BC%9ARollover-Index-VS-Index-Lifecycle-Management.html"/>
<url>/2020/05/17/es%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%B8%80%EF%BC%9ARollover-Index-VS-Index-Lifecycle-Management.html</url>
<content type="html"><![CDATA[<blockquote><p>如无特别说明,本文讨论的内容均基于 es 7.*</p></blockquote><h2 id="es的Rollover索引"><a href="#es的Rollover索引" class="headerlink" title="es的Rollover索引"></a>es的Rollover索引</h2><p>es的Rollover索引通常指的是一个别名指向某个索引,并且能够在索引的某些条件下进行轮转,如索引的创建时间长短、大小、文档数量。</p><p>如创建一个名为 nginx_log-000001 的索引,并指定其alias为nginx_log_write,并且我们对nginx_log_write写入3个文档(其实也是对nginx_log-000001写)。然后对别名调用rollover接口,<br>由于已经达到文档数目为3的条件,则会自动生成 nginx_log-000002 的索引。这时对nginx_log_write写入会自动写入到nginx_log-000002索引中。</p><p>需要注意的是,由于对索引设置alias的时候,没有添加 <code>"is_write_index": true</code> 配置,则在执行rollover并创建新索引成功后,将会只指向<strong>一个</strong>索引(新索引),对nginx_log_write查询只能查到最新索引的数据,而不能查到历史数据。相反,如果配置了<code>"is_write_index": true</code>,在rollover后alias会<strong>同时</strong>指向多个索引,并且最新索引设置为<code>"is_write_index": true</code>,旧索引设置为<code>"is_write_index": false</code>,对alias的<br>写入就是对最新索引的写入,查询时是对所有索引进行<strong>查询</strong>。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span 
class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br></pre></td><td class="code"><pre><span class="line"># 创建索引nginx_log-000001,并设置其别名为nginx_log_write</span><br><span class="line">PUT /nginx_log-000001</span><br><span class="line">{</span><br><span class="line"> "aliases": {</span><br><span class="line"> "nginx_log_write": {</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 对别名写入文档,重复执行3次</span><br><span class="line">POST nginx_log_write/_doc</span><br><span class="line">{</span><br><span class="line"> "log":"something before rollover"</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 对别名执行rollover</span><br><span class="line">POST /nginx_log_write/_rollover</span><br><span class="line">{</span><br><span class="line"> "conditions": {</span><br><span class="line"> "max_age": "1d",</span><br><span class="line"> "max_docs": 3,</span><br><span class="line"> "max_size": "5gb"</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 对新索引插入新数据</span><br><span class="line">POST nginx_log_write/_doc</span><br><span class="line">{</span><br><span class="line"> "log":"something after rollover"</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 分别查 nginx_log-000001、nginx_log-000002、nginx_log_write,对nginx_log_write只能查到最新索引的数据</span><br><span class="line">POST 
nginx_log_write/_search</span><br><span class="line">{</span><br><span class="line"> "query":{</span><br><span class="line"> "match_all": {</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 对非 "is_write_index": true 模式的索引,可用 index_name-* 查询所有数据</span><br><span class="line">POST nginx_log-*/_search</span><br><span class="line">{</span><br><span class="line"> "query":{</span><br><span class="line"> "match_all": {</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>另外,我们可以利用 <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.2/date-math-index-names.html" target="_blank" rel="noopener">Date Math</a> 创建带日期的rollover索引,更加方便索引管理。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"># PUT /<nginx_log-{now/d}-000001>,将创建名为 nginx_log-2020.05.17-000001 的索引</span><br><span class="line">PUT /%3Cnginx_log-%7Bnow%2Fd%7D-000001%3E</span><br><span class="line">{</span><br><span class="line"> "aliases": {</span><br><span class="line"> "nginx_log_write": {</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>需要注意的是 <code>_rollover</code> api只会对调用该接口的那个时刻有效,当然可以自己独立做一个任务周期性扫描所有别名,当别名到达一定条件后就调用其 <code>_rollover</code> 接口。如果需要es自身定时调用的话,可以使用自动化程度更高的 Index Lifecycle Management。</p><h2 id="Index-Lifecycle-Management"><a href="#Index-Lifecycle-Management" class="headerlink" title="Index Lifecycle Management"></a>Index Lifecycle Management</h2><p>与 _rollover 
索引相比,索引生命周期管理会更加自动化,ILM把索引的生命周期分为4个phase,分别为Hot、Warm、Cold、Delete。每个phase可以包含多个action。</p><table><thead><tr><th>action</th><th>允许该action的phase</th><th>action意义</th></tr></thead><tbody><tr><td>Rollover</td><td>hot</td><td>和 rollover 索引的条件一致</td></tr><tr><td>Read-Only</td><td>warm</td><td>通过<code>"index.blocks.write": true</code> 把原索引设置为只读</td></tr><tr><td>Allocation</td><td>warm, cold</td><td>移动索引时指定的亲和性规则,包括include, exclude, require。同时还可以通过 number_of_replicas 变更副本数量,比如指定为0。</td></tr><tr><td>Shrink</td><td>warm</td><td>合并shard,创建 shrink-${origin_index_name},前提是需要把原索引的shard移动到同一个node上,需要留意node是否有足够的容量。并且会通过<code>"index.blocks.write": true</code> 把原索引设置为只读,并最终删除原索引。</td></tr><tr><td>Force Merge</td><td>warm</td><td>合并segment。和shrink一样,会通过<code>"index.blocks.write": true</code> 把原索引设置为只读</td></tr><tr><td>Freeze</td><td>cold</td><td>冻结索引。适用于很少查询的旧索引,es通过冻结索引能够减少堆内存的使用</td></tr><tr><td>Delete</td><td>delete</td><td>删除索引</td></tr><tr><td>Set Priority</td><td>hot, warm, cold</td><td>重启时,恢复索引的优先度,值越大越优先恢复</td></tr><tr><td>Unfollow</td><td>hot,warm,cold</td><td>把CCR(跨集群复制)的follower索引转换为普通索引,是rollover、shrink等action执行前的中间步骤</td></tr></tbody></table><a id="more"></a><p>创建一个名为 my_policy 的索引周期管理策略,设置以下几个方面</p><ul><li>如果索引文档超过10个则进行rollover,创建新索引。</li><li>原索引立刻移动到box_type=warm的机器,然后创建名为 shrink-${index_name}的索引,同时把shard合并为1个(所有shard都会移动到同一个node),并创建原索引同名alias以及删除原索引。</li><li>原索引被rollover 1h后,移动到box_type=cold的机器。</li><li>原索引被rollover 2h后,直接被删除。</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span
class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span 
class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br></pre></td><td class="code"><pre><span class="line"># 创建mypolicy</span><br><span class="line">PUT /_ilm/policy/my_policy</span><br><span class="line">{</span><br><span class="line"> "policy": {</span><br><span class="line"> "phases": {</span><br><span class="line"> "hot": {</span><br><span class="line"> "actions": {</span><br><span class="line"> "rollover": {</span><br><span class="line"> "max_docs": 10</span><br><span class="line"> },</span><br><span class="line"> "set_priority": {</span><br><span class="line"> "priority": 100</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "warm": {</span><br><span class="line"> "actions": {</span><br><span class="line"> "allocate": {</span><br><span class="line"> "require": {</span><br><span class="line"> "box_type": "warm"</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span 
class="line"> "shrink": {</span><br><span class="line"> "number_of_shards": 1</span><br><span class="line"> },</span><br><span class="line"> "set_priority": {</span><br><span class="line"> "priority": 50</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "cold": {</span><br><span class="line"> "min_age": "1h",</span><br><span class="line"> "actions": {</span><br><span class="line"> "allocate": {</span><br><span class="line"> "require": {</span><br><span class="line"> "box_type": "cold"</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "set_priority": {</span><br><span class="line"> "priority": 0</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "delete": {</span><br><span class="line"> "min_age": "2h",</span><br><span class="line"> "actions": {</span><br><span class="line"> "delete": {}</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 设置索引模版</span><br><span class="line">PUT /_template/log_ilm_template</span><br><span class="line">{</span><br><span class="line"> "index_patterns" : [</span><br><span class="line"> "nginx_log-*"</span><br><span class="line"> ],</span><br><span class="line"> "settings" : {</span><br><span class="line"> "index" : {</span><br><span class="line"> "lifecycle" : {</span><br><span class="line"> "name" : "my_policy",</span><br><span class="line"> "rollover_alias" : "nginx_log_write"</span><br><span class="line"> },</span><br><span class="line"> "routing" : {</span><br><span class="line"> "allocation" : {</span><br><span class="line"> "require" : {</span><br><span class="line"> "box_type" : "hot"</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span 
class="line"> },</span><br><span class="line"> "number_of_shards" : "2",</span><br><span class="line"> "number_of_replicas" : "0"</span><br><span class="line"> }</span><br><span class="line"> },</span><br><span class="line"> "mappings" : { },</span><br><span class="line"> "aliases" : { }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"># 创建带时间的索引,并指定其别名为 nginx_log_write,并指定其只能分配在box_type为hot的节点</span><br><span class="line"># PUT /<nginx_log-{now/d}-000001>,将创建名为 nginx_log-2020.05.17-000001 的索引</span><br><span class="line">PUT /%3Cnginx_log-%7Bnow%2Fd%7D-000001%3E</span><br><span class="line">{</span><br><span class="line"> "settings": {</span><br><span class="line"> "index.routing.allocation.include.box_type":"hot"</span><br><span class="line"> },</span><br><span class="line"> "aliases": {</span><br><span class="line"> "nginx_log_write": {</span><br><span class="line"> "is_write_index": true</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 写入11个doc,最多等待10分钟后索引就会被rollover</span><br><span class="line"># 1.等待10分钟后可以发现 nginx_log-2020.05.17-000002 的索引被创建</span><br><span class="line"># 2.创建了索引shrink-nginx_log-2020.05.21-000001,shard被shrink为1个,并删除原来的索引 nginx_log-2020.05.17-000001 </span><br><span class="line"># 3.创建别名nginx_log-2020.05.21-000001指向索引shrink-nginx_log-2020.05.21-000001</span><br><span class="line">POST nginx_log_write/_doc</span><br><span class="line">{</span><br><span class="line"> "log":"something 01"</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"># 如果觉得太久,可以将ilm设置为60秒刷新一次,默认为10分钟刷新一次</span><br><span class="line">PUT _cluster/settings</span><br><span class="line">{</span><br><span class="line"> "persistent": {</span><br><span class="line"> "indices.lifecycle.poll_interval":"60s"</span><br><span class="line"> }</span><br><span
class="line">}</span><br></pre></td></tr></table></figure><p>使用 ILM 的注意点:</p><ul><li>创建索引时,设置alias需要指定<code>"is_write_index": true</code></li><li>在设置index template时,在settings中的 <code>"index.lifecycle.rollover_alias"</code> 设置的别名要和创建索引时指定的别名一致。</li><li>index template中的 routing.allocation.${condition} 最好和 ILM 中allocate指定的一致(即同时使用require,或者同时使用include)。因为这样才不会导致旧索引无法被移到新节点。比如index template指定 <strong>include</strong> hot,ILM warm中allocate中指定 <strong>require</strong> warm,那么在hot阶段rollover后进入到warm阶段的allocate时,可能会导致无法移动索引,因为无法找到一个同时满足hot和warm条件的节点。但是如果同时为include或者require,则ILM会覆盖template设置的条件,索引可以成功移动。</li></ul><h3 id="Rollover-Index-VS-Index-Lifecycle-Management"><a href="#Rollover-Index-VS-Index-Lifecycle-Management" class="headerlink" title="Rollover Index VS. Index Lifecycle Management"></a>Rollover Index VS. Index Lifecycle Management</h3><table><thead><tr><th style="text-align:center"></th><th style="text-align:center">自动调用</th><th style="text-align:center">alias必须设置为write模式</th><th style="text-align:center">alias名称限定</th><th style="text-align:center">支持时间序列索引</th><th style="text-align:center">支持移动索引</th></tr></thead><tbody><tr><td style="text-align:center">Rollover Index</td><td style="text-align:center">否</td><td style="text-align:center">可选</td><td style="text-align:center">否</td><td style="text-align:center">是</td><td style="text-align:center">否</td></tr><tr><td style="text-align:center">Index Lifecycle Management</td><td style="text-align:center">是,间隔为 indices.lifecycle.poll_interval</td><td style="text-align:center">是</td><td style="text-align:center">需要和"index.lifecycle.rollover_alias"同名</td><td style="text-align:center">是</td><td style="text-align:center">是,需要注意index template中和ILM中使用同等类型的限制</td></tr></tbody></table><h3 id="ILM-min-age-vs-rollover-max-age"><a href="#ILM-min-age-vs-rollover-max-age" class="headerlink" title="ILM min_age vs. rollover max_age"></a>ILM min_age vs. 
rollover max_age</h3><p>我们需要对索引生命周期中的 min_age 和 rollover 中的 max_age做一下区分。我们知道,除了直接的rollover接口外,其实ILM中也是存在rollover的,如上所述它存在三个条件,包括 max_size, max_docs, max_age,其中 max_age 针对的是索引的 <strong>创建时间</strong>。<br>ILM的各个phase之间存在间隔,它通过min_age定义,比如上面的1小时以及2小时,它针对的是索引的 <strong>创建时间</strong> 或者 <strong>rollover时间</strong>。如果上个phase的index不是rollover来的,那么它指的是索引创建时间;否则,它指的是rollover时间(比如hot phase中没有进行rollover,那么warm中定义的min_age指的就是索引创建时间)。<br>官方文档也有详细的解释:</p><blockquote><p><a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.2/using-policies-rollover.html" target="_blank" rel="noopener">https://www.elastic.co/guide/en/elasticsearch/reference/7.2/using-policies-rollover.html</a><br>Once an index rolls over, index lifecycle management uses the timestamp of the rollover operation rather than the index creation time to evaluate when to move the index to the next phase. For indices that have rolled over, the min_age criteria specified for a phase is relative to the rollover time for indices. In this example, that means the index will be deleted 30 days after rollover, not 30 days from when the index was created.</p></blockquote><blockquote><p><a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.2/_timing.html" target="_blank" rel="noopener">https://www.elastic.co/guide/en/elasticsearch/reference/7.2/_timing.html</a><br>min_age is usually the time elapsed from the time the index is created. If the index is rolled over, then min_age is the time elapsed from the time the index is rolled over. 
The intention here is to execute following phases and actions relative to when data was written last to a rolled over index.</p></blockquote><p>参考:<br><a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.6/indices-rollover-index.html" target="_blank" rel="noopener">Rollover index API</a><br><a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.6/_actions.html" target="_blank" rel="noopener">Actions</a><br><a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.6/ilm-actions.html" target="_blank" rel="noopener">Index lifecycle actions</a> </p><blockquote><p>本文为学习过程中产生的总结,由于学艺不精可能有些观点或者描述有误,还望各位同学帮忙指正,共同进步。</p></blockquote>]]></content>
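上面关于 min_age 基准时间的规则,可以用一小段 Python 做个示意(纯演示性质,函数与变量名均为自拟,与 Elasticsearch 源码无关):发生过 rollover 的索引以 rollover 时间为基准,否则以索引创建时间为基准。

```python
from datetime import datetime, timedelta

# 示意:ILM 评估 min_age 时的基准时间(仅为演示,非 ES 实现)
def next_phase_time(created, min_age, rolled_over=None):
    """若索引发生过 rollover,以 rollover 时间为基准;
    否则以创建时间为基准,加上 min_age 即进入下一 phase 的时间。"""
    base = rolled_over if rolled_over is not None else created
    return base + min_age

created = datetime(2020, 5, 17, 0, 0)
rolled = datetime(2020, 5, 17, 6, 0)
# 没有发生 rollover:基于创建时间,2 小时后进入下一 phase
print(next_phase_time(created, timedelta(hours=2)))
# 发生过 rollover:基于 rollover 时间计算
print(next_phase_time(created, timedelta(hours=2), rolled))
```

对应上文的例子:delete phase 的 min_age 为 2h,若索引在 6 点完成 rollover,则删除发生在 8 点,而不是创建时间之后的 2 小时。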
<tags>
<tag> elasticsearch </tag>
</tags>
</entry>
<entry>
<title>Spark学习系列之三:join的宽依赖vs.窄依赖</title>
<link href="/2020/01/02/Spark%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%B8%89%EF%BC%9Ajoin%E7%9A%84%E5%AE%BD%E4%BE%9D%E8%B5%96vs-%E7%AA%84%E4%BE%9D%E8%B5%96.html"/>
<url>/2020/01/02/Spark%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%B8%89%EF%BC%9Ajoin%E7%9A%84%E5%AE%BD%E4%BE%9D%E8%B5%96vs-%E7%AA%84%E4%BE%9D%E8%B5%96.html</url>
<content type="html"><![CDATA[<blockquote><p>如无特别说明,本文源码版本为 spark 2.3.4<br>两个rdd join时产生新的rdd,是宽依赖,还是窄依赖?</p></blockquote><h2 id="join-transformation"><a href="#join-transformation" class="headerlink" title="join transformation"></a>join transformation</h2><p><img src="/2020/01/02/Spark学习系列之三:join的宽依赖vs-窄依赖/narrow_wide_dependency.png" alt="narrow_wide_dependency.png"></p><p>以上图片是个经常用来解释宽窄依赖的经典图,来源于论文<<Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing>>。以下这段话也来自于该论文:</p><blockquote><p>join: Joining two RDDs may lead to either two narrow dependencies (if they are both hash/range partitioned with the same partitioner), two wide dependencies, or a mix (if one parent has a partitioner and one does not). In either case, the output RDD has a partitioner (either one inherited from the parents or a default hash partitioner)</p></blockquote><p>或许我们会好奇,为什么同样是join操作,有时是宽依赖,有时是窄依赖?我们先从两个简单的实验开始,再从源码看其实现方式。</p><h3 id="rdd1和rdd2的partitioner不同"><a href="#rdd1和rdd2的partitioner不同" class="headerlink" title="rdd1和rdd2的partitioner不同"></a>rdd1和rdd2的partitioner不同</h3><p>假设我们有rdd1和rdd2,其partitioner分别为partitioner1、partitioner2。分区器定义如下: </p><p>partitioner1:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">numPartiton = 3</span><br><span class="line">func = x mod numPartiton</span><br></pre></td></tr></table></figure><p>partitioner2:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">numPartiton = 5</span><br><span class="line">func = (x * 3) mod numPartiton</span><br></pre></td></tr></table></figure><p>rdd1的初始分布如下:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span
class="line">3</span><br></pre></td><td class="code"><pre><span class="line">partition0: (0, "a"), (3, "e")</span><br><span class="line">partition1: (1, "b")</span><br><span class="line">partition2: (2, "c")</span><br></pre></td></tr></table></figure><p>rdd2的初始分布如下:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">partition0: (0, "e"), (0, "j")</span><br><span class="line">partition1: (2, "f")</span><br><span class="line">partition2: (4, "g")</span><br><span class="line">partition3: (1, "h"), (6, "k")</span><br><span class="line">partition4: (3, "i")</span><br></pre></td></tr></table></figure><p>rdd3=rdd2.join(rdd1),rdd3数据分布如下:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">partition0: (0, ("e", "a")), (0, ("j", "a"))</span><br><span class="line">partition1: (2, ("f", "c"))</span><br><span class="line">partition2: </span><br><span class="line">partition3: (1, ("h", "b"))</span><br><span class="line">partition4: (3, ("i", "e"))</span><br></pre></td></tr></table></figure><p>rdd3和rdd1以及rdd2的partition之间的依赖关系如下:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">rdd1.partition0 ==> rdd3.partition0, rdd3.partition4 </span><br><span class="line">rdd1.partition1 ==> 
rdd3.partition3 </span><br><span class="line">rdd1.partition2 ==> rdd3.partition1 </span><br><span class="line"></span><br><span class="line">rdd2.partition0 ==> rdd3.partition0 </span><br><span class="line">rdd2.partition1 ==> rdd3.partition1 </span><br><span class="line">rdd2.partition2 ==> rdd3.partition2 </span><br><span class="line">rdd2.partition3 ==> rdd3.partition3 </span><br><span class="line">rdd2.partition4 ==> rdd3.partition4</span><br></pre></td></tr></table></figure><p>可以看到rdd1的partition0 同时被rdd3的partition0和partition4依赖,父rdd的一个partition被子rdd多个partition依赖,所以此时rdd3对rdd1的依赖为宽依赖,而对rdd2为窄依赖。</p><a id="more"></a><h3 id="rdd1和rdd2的partitioner相同"><a href="#rdd1和rdd2的partitioner相同" class="headerlink" title="rdd1和rdd2的partitioner相同"></a>rdd1和rdd2的partitioner相同</h3><p>我们统一rdd1和rdd2的partitioner,再观察其依赖状态。</p><p>partitioner:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">numPartiton = 3</span><br><span class="line">func = x mod numPartiton</span><br></pre></td></tr></table></figure><p>rdd1的初始分布如下:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">partition0: (0, "a"), (3, "e")</span><br><span class="line">partition1: (1, "b")</span><br><span class="line">partition2: (2, "c")</span><br></pre></td></tr></table></figure><p>rdd2的初始分布如下:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">partition0: (0, "e"), (0, "j"), (3, "i"), (6, "k")</span><br><span class="line">partition1: (1, "h"), (4, "g")</span><br><span class="line">partition2: (2, 
"f")</span><br></pre></td></tr></table></figure><p>rdd3=rdd2.join(rdd1),rdd3数据分布如下:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">partition0: (0, ("e", "a")), (0, ("j","a")), (3, ("i", "e"))</span><br><span class="line">partition1: (1, ("h", "b"))</span><br><span class="line">partition2: (2, ("f", "c"))</span><br></pre></td></tr></table></figure><p>rdd3和rdd1以及rdd2的partition之间的依赖关系如下:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">rdd1.partition0 ==> rdd3.partition0 </span><br><span class="line">rdd1.partition1 ==> rdd3.partition1 </span><br><span class="line">rdd1.partition2 ==> rdd3.partition2 </span><br><span class="line"></span><br><span class="line">rdd2.partition0 ==> rdd3.partition0 </span><br><span class="line">rdd2.partition1 ==> rdd3.partition1 </span><br><span class="line">rdd2.partition2 ==> rdd3.partition2</span><br></pre></td></tr></table></figure><p>rdd1和rdd2的每个partition都只被rdd3的一个partition依赖,故rdd3对rdd1和rdd2的依赖为窄依赖。</p><h3 id="小结"><a href="#小结" class="headerlink" title="小结"></a>小结</h3><p>通过对比两种情况,可以发现当两个父rdd的partitioner相同时,根本不会发生partition间的传输。这也是合理的,因为子rdd根据key计算分区时,结果也会和当前所在分区一致。partitioner不同时,其中一个至少会发生shuffle。当两个父rdd均没有partitioner时,将会进行两次shuffle。</p><h2 id="分析源码实现"><a href="#分析源码实现" class="headerlink" title="分析源码实现"></a>分析源码实现</h2><figure class="highlight scala"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> org.apache.spark.rdd.<span class="type">PairRDDFunctions</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">/**</span></span><br><span class="line"><span class="comment"> * Return an RDD containing all pairs of elements 
with matching keys in `this` and `other`. Each</span></span><br><span class="line"><span class="comment"> * pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in `this` and</span></span><br><span class="line"><span class="comment"> * (k, v2) is in `other`. Performs a hash join across the cluster.</span></span><br><span class="line"><span class="comment"> */</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">join</span></span>[<span class="type">W</span>](other: <span class="type">RDD</span>[(<span class="type">K</span>, <span class="type">W</span>)]): <span class="type">RDD</span>[(<span class="type">K</span>, (<span class="type">V</span>, <span class="type">W</span>))] = self.withScope {</span><br><span class="line"> join(other, defaultPartitioner(self, other))</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> <span class="comment">/**</span></span><br><span class="line"><span class="comment"> * Return an RDD containing all pairs of elements with matching keys in `this` and `other`. Each</span></span><br><span class="line"><span class="comment"> * pair of elements will be returned as a (k, (v1, v2)) tuple, where (k, v1) is in `this` and</span></span><br><span class="line"><span class="comment"> * (k, v2) is in `other`. 
Uses the given Partitioner to partition the output RDD.</span></span><br><span class="line"><span class="comment"> */</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">join</span></span>[<span class="type">W</span>](other: <span class="type">RDD</span>[(<span class="type">K</span>, <span class="type">W</span>)], partitioner: <span class="type">Partitioner</span>): <span class="type">RDD</span>[(<span class="type">K</span>, (<span class="type">V</span>, <span class="type">W</span>))] = self.withScope {</span><br><span class="line"> <span class="keyword">this</span>.cogroup(other, partitioner).flatMapValues( pair =></span><br><span class="line"> <span class="keyword">for</span> (v <- pair._1.iterator; w <- pair._2.iterator) <span class="keyword">yield</span> (v, w)</span><br><span class="line"> )</span><br><span class="line"> }</span><br><span class="line"> </span><br><span class="line"> <span class="comment">/**</span></span><br><span class="line"><span class="comment"> * For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the</span></span><br><span class="line"><span class="comment"> * list of values for that key in `this` as well as `other`.</span></span><br><span class="line"><span class="comment"> */</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">cogroup</span></span>[<span class="type">W</span>](other: <span class="type">RDD</span>[(<span class="type">K</span>, <span class="type">W</span>)], partitioner: <span class="type">Partitioner</span>)</span><br><span class="line"> : <span class="type">RDD</span>[(<span class="type">K</span>, (<span class="type">Iterable</span>[<span class="type">V</span>], <span class="type">Iterable</span>[<span class="type">W</span>]))] = self.withScope {</span><br><span class="line"> <span class="keyword">if</span> (partitioner.isInstanceOf[<span 
class="type">HashPartitioner</span>] && keyClass.isArray) {</span><br><span class="line"> <span class="keyword">throw</span> <span class="keyword">new</span> <span class="type">SparkException</span>(<span class="string">"HashPartitioner cannot partition array keys."</span>)</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">val</span> cg = <span class="keyword">new</span> <span class="type">CoGroupedRDD</span>[<span class="type">K</span>](<span class="type">Seq</span>(self, other), partitioner)</span><br><span class="line"> cg.mapValues { <span class="keyword">case</span> <span class="type">Array</span>(vs, w1s) =></span><br><span class="line"> (vs.asInstanceOf[<span class="type">Iterable</span>[<span class="type">V</span>]], w1s.asInstanceOf[<span class="type">Iterable</span>[<span class="type">W</span>]])</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> </span><br><span class="line">--------------</span><br><span class="line"><span class="keyword">package</span> org.apache.spark.rdd.<span class="type">CoGroupedRDD</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">override</span> <span class="function"><span class="keyword">def</span> <span class="title">getDependencies</span></span>: <span class="type">Seq</span>[<span class="type">Dependency</span>[_]] = {</span><br><span class="line"> rdds.map { rdd: <span class="type">RDD</span>[_] =></span><br><span class="line"> <span class="keyword">if</span> (rdd.partitioner == <span class="type">Some</span>(part)) {</span><br><span class="line"> logDebug(<span class="string">"Adding one-to-one dependency with "</span> + rdd)</span><br><span class="line"> <span class="keyword">new</span> <span class="type">OneToOneDependency</span>(rdd)</span><br><span class="line"> } <span class="keyword">else</span> {</span><br><span class="line"> logDebug(<span class="string">"Adding shuffle dependency with "</span> + 
rdd)</span><br><span class="line"> <span class="keyword">new</span> <span class="type">ShuffleDependency</span>[<span class="type">K</span>, <span class="type">Any</span>, <span class="type">CoGroupCombiner</span>](</span><br><span class="line"> rdd.asInstanceOf[<span class="type">RDD</span>[_ <: <span class="type">Product2</span>[<span class="type">K</span>, _]]], part, serializer)</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br></pre></td></tr></table></figure><p>考虑rdd3=rdd1.join(rdd2),什么时候会进行shuffle,什么时候不会?我们先看看rdd3=rdd1.join(rdd2) 调用栈:</p><ol><li>rdd1通过隐式转换为PairRDDFunctions(通过rddToPairRDDFunctions进行隐式转换)</li><li>调用<code>org.apache.spark.rdd.PairRDDFunctions</code>的<code>join[W](other: RDD[(K, W)])</code>方法</li><li>使用 defaultPartitioner(self, other) 获取或者创建partitioner,并调用<code>join(other, defaultPartitioner(self, other))</code>方法,defaultPartitioner的计算方法见上篇文章<code>Spark学习系列之二:rdd分区数量分析</code>。</li><li>通过CoGroupedRDD[K](Seq(self, other), partitioner)创建rdd,聚合父rdd相同key到一个长度为二的数组中,每个数组的类型为Iterable,即 RDD[(K, (Iterable[V], Iterable[W]))]。</li><li>在CoGroupedRDD的getDependencies中,rdds为rdd1和rdd2组成的序列,rdds.map遍历该序列: <ul><li>序列中rdd的partitioner和defaultPartitioner返回的相等时(partitioner是否相等取决于其实现的equals方法,如HashPartitioner实现的equals方法只有在同种类型的Partitioner,并且分区数量一致时,才返回true),返回OneToOneDependency。它是NarrowDependency的实现类,代表子rdd的一个partition只依赖父rdd的一个partition,其实现了getParents(partitionId: Int)方法,可以根据子rdd的partitionId获取依赖父rdd的partitionId的List,并且返回的List大小为1。</li><li>如果不相等,则返回ShuffleDependency。</li></ul></li><li>综上所述,可能出现四种情况<ul><li>rdd3与rdd1和rdd2均为窄依赖,rdd1和rdd2的partitioner与defaultPartitioner()返回的相等。</li><li>rdd3与rdd1和rdd2均为宽依赖,rdd1和rdd2的partitioner与defaultPartitioner()返回的不相等,或者rdd1和rdd2的partitioner不存在。</li><li>rdd3与rdd1为窄依赖,与rdd2为宽依赖,rdd1的partitioner与defaultPartitioner()返回的相等,rdd2的partitioner与defaultPartitioner()返回的不相等,或者rdd2的partitioner不存在。</li><li>rdd3与rdd2为窄依赖,与rdd1为宽依赖,类似第3条</li></ul></li></ol><h2 id="如何知道是否会发生shuffle"><a href="#如何知道是否会发生shuffle" 
class="headerlink" title="如何知道是否会发生shuffle"></a>如何知道是否会发生shuffle</h2><p>一般有两个办法:</p><ul><li>通过dependencies,查看rdd的依赖,如果为OneToOneDependency、PruneDependency、RangeDependency则为窄依赖,如果为ShuffleDependency则为宽依赖。</li><li>通过toDebugString查看血统中是否有ShuffledRDD。</li></ul><p>如:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">val wordsRdd = sc.parallelize(largeList)</span><br><span class="line"></span><br><span class="line">/* dependencies */</span><br><span class="line"></span><br><span class="line">val pairs = wordsRdd.map(c=>(c,1))</span><br><span class="line"> .groupByKey</span><br><span class="line"> .dependencies // <-------------</span><br><span class="line">// pairs: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.ShuffleDependency@4294a23d)</span><br><span class="line"></span><br><span class="line">/* toDebugString */</span><br><span class="line"></span><br><span class="line">val pairs = wordsRdd.map(c=>(c,1))</span><br><span class="line"> .groupByKey</span><br><span class="line"> .toDebugString // <-------------</span><br><span class="line">// pairs: String =</span><br><span class="line">// (8) ShuffledRDD[219] at groupByKey at <console>:38 []</span><br><span class="line">// +-(8) MapPartitionsRDD[218] at map at <console>:37 []</span><br><span class="line">// | ParallelCollectionRDD[217] at parallelize at <console>:36 
[]</span><br></pre></td></tr></table></figure><p>另外一个办法是记住 <strong>可能</strong> 会发生shuffle的transformation:</p><ul><li>cogroup</li><li>groupWith</li><li>join</li><li>leftOuterJoin</li><li>rightOuterJoin</li><li>groupByKey</li><li>reduceByKey</li><li>combineByKey</li><li>distinct</li><li>intersection</li><li>repartition</li><li>coalesce</li><li>partitionBy</li><li>sortByKey</li><li>sortBy</li></ul><h2 id="使用分区可以避免shuffle的常见场景"><a href="#使用分区可以避免shuffle的常见场景" class="headerlink" title="使用分区可以避免shuffle的常见场景"></a>使用分区可以避免shuffle的常见场景</h2><ol><li><p>运行在 <strong>预分区</strong> RDD上的reduceByKey将只会在本地计算值,只需要将最终的reduced值从worker发送到driver。</p></li><li><p>在两个RDD上调用的join,这些RDD使用相同的分区器进行 <strong>预分区</strong> 并 <strong>缓存</strong> 在同一台计算机上,这将导致只在本地计算join,而不会在网络上进行shuffle。</p></li></ol><h2 id="对常见的transformation进行分类"><a href="#对常见的transformation进行分类" class="headerlink" title="对常见的transformation进行分类"></a>对常见的transformation进行分类</h2><p>transformation种类繁多,可能有点难记:有些会保持partitioner、有些不保持partitioner、有些会shuffle、有些不会shuffle。我们可以根据这两个维度对常见的transformation进行划分。</p><ul><li>有根据key移动的需求,可能会shuffle(除非已经根据partitioner分区过了);不会改变key并且保持原有的分区<ul><li>cogroup</li><li>groupWith</li><li>join</li><li>leftOuterJoin</li><li>rightOuterJoin</li><li>groupByKey</li><li>reduceByKey</li><li>combineByKey</li><li>partitionBy</li><li>sortByKey</li></ul></li><li>有根据key移动的需求,可能会shuffle(除非已经根据partitioner分区过了);会改变key并且不保持原有的分区<ul><li>distinct</li><li>intersection</li><li>repartition</li><li>coalesce</li><li>sortBy</li></ul></li><li>没有根据key移动的需求,不会shuffle;不会改变key并且保持原有的分区<ul><li>foldByKey</li><li>mapValues</li><li>flatMapValues</li><li>filter</li></ul></li><li>没有根据key移动的需求,不会shuffle;会改变key,不保持原有的分区<ul><li>map</li><li>flatMap</li></ul></li></ul><p>对于shuffle后还会被再次使用的rdd,需要进行cache,这里就不多描述了。我们需要考虑的另一个问题是,shuffle后的rdd进行丢失分区的transformation会怎样?即从第1种转换到第2、4种,从第2种转换到第2、4种。<br>从1转到2,以及从2转到2可能不会有什么问题,因为这个转换依然会进行shuffle,视情况进行cache就行。但是对于从1到4,以及从2到4,我们的分区信息就丢失了。</p><table><thead><tr><th style="text-align:center">x -> y</th><th 
style="text-align:center">shuffle, keep key</th><th style="text-align:center">shuffle, change key</th><th style="text-align:center">no shuffle, keep key</th><th style="text-align:center">no shuffle, change key</th></tr></thead><tbody><tr><td style="text-align:center">shuffle, keep key</td><td style="text-align:center">/</td><td style="text-align:center">/</td><td style="text-align:center">√</td><td style="text-align:center">X</td></tr><tr><td style="text-align:center">shuffle, change key</td><td style="text-align:center">/</td><td style="text-align:center">/</td><td style="text-align:center">√</td><td style="text-align:center">X</td></tr><tr><td style="text-align:center">no shuffle, keep key</td><td style="text-align:center">/</td><td style="text-align:center">/</td><td style="text-align:center">O</td><td style="text-align:center">O</td></tr><tr><td style="text-align:center">no shuffle, change key</td><td style="text-align:center">/</td><td style="text-align:center">/</td><td style="text-align:center">O</td><td style="text-align:center">O</td></tr></tbody></table><p>“/”表示只需视情况对y进行cache;“O”表示正常的转化,一般不需要cache;“√”表示正常的转化,视情况对x进行cache;“X”表示将丢失分区信息,如果y后面的transformation z需要进行shuffle,那么将不得不重新shuffle,如果z不需要shuffle则不会有大问题。</p><h2 id="参考"><a href="#参考" class="headerlink" title="参考"></a>参考</h2><p><a href="https://github.com/rohgar/scala-spark-4/wiki/Wide-vs-Narrow-Dependencies" target="_blank" rel="noopener">https://github.com/rohgar/scala-spark-4/wiki/Wide-vs-Narrow-Dependencies</a></p><blockquote><p>本文为学习过程中产生的总结,由于学艺不精可能有些观点或者描述有误,还望各位同学帮忙指正,共同进步。</p></blockquote>]]></content>
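文中 partitioner 相同/不同两组实验的依赖关系,可以用一小段 Python 模拟验证(纯模拟,非 Spark API,函数名 deps 为自拟;p1、p2 对应文中的 partitioner1 和 partitioner2):

```python
# 统计父 rdd 的每个 partition 被子 rdd 的哪些 partition 依赖
# (若每个父分区只对应一个子分区,则为窄依赖,否则为宽依赖)
def deps(keys, parent_part, child_part):
    d = {}
    for k in keys:
        d.setdefault(parent_part(k), set()).add(child_part(k))
    return d

p1 = lambda x: x % 3          # partitioner1: numPartiton = 3
p2 = lambda x: (x * 3) % 5    # partitioner2: numPartiton = 5

keys = [0, 1, 2, 3]           # rdd1 中的 key
# 子 rdd 使用 p2:rdd1 的 partition0(key 0 和 3)被子 rdd 的
# partition0 和 partition4 同时依赖,即宽依赖
print(deps(keys, p1, p2))
# 两边使用同一个分区器:每个父分区只被一个子分区依赖,即窄依赖
print(deps(keys, p1, p1))
```

输出中父分区到子分区的映射,与文中两个实验列出的依赖关系是一致的。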
<tags>
<tag> spark </tag>
<tag> scala </tag>
</tags>
</entry>
<entry>
<title>使用dbutils作为pymysql的连接池时,setsession偶尔失效的问题</title>
<link href="/2019/12/23/%E4%BD%BF%E7%94%A8dbutils%E4%BD%9C%E4%B8%BApymysql%E7%9A%84%E8%BF%9E%E6%8E%A5%E6%B1%A0%E6%97%B6%EF%BC%8Csetsession%E5%81%B6%E5%B0%94%E5%A4%B1%E6%95%88%E7%9A%84%E9%97%AE%E9%A2%98.html"/>
<url>/2019/12/23/%E4%BD%BF%E7%94%A8dbutils%E4%BD%9C%E4%B8%BApymysql%E7%9A%84%E8%BF%9E%E6%8E%A5%E6%B1%A0%E6%97%B6%EF%BC%8Csetsession%E5%81%B6%E5%B0%94%E5%A4%B1%E6%95%88%E7%9A%84%E9%97%AE%E9%A2%98.html</url>
<content type="html"><![CDATA[<blockquote><p>版本情况dbutils:1.1; pymysql:0.9.3; python:2.7.13</p></blockquote><h2 id="线上情景"><a href="#线上情景" class="headerlink" title="线上情景"></a>线上情景</h2><p>最近线上维护时,由于只需要更改数据库配置,所以就重启了数据库,而python应用没有重启。在重启数据库后,日志显示正常,也能成功入库。后来接到反馈表示有部分数据没有入库,紧急重启python应用,后续数据入库正常。而我则负责找出原因以及修复bug的工作。</p><h2 id="调研原因"><a href="#调研原因" class="headerlink" title="调研原因"></a>调研原因</h2><p>在排查完其他问题后,最异常的是,有部分请求日志显示处理成功了,但是却没有入库,排查了好几天找不到原因。为此写了demo来帮助排查,为了可以自动commit,采用的是setsession=["set autocommit=1"]方式设置每个底层的连接为自动提交。在测试demo期间,数据库重启后,后续的sql就无法入库。demo代码如下:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span
class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># -*- coding: utf-8 -*-</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> pymysql</span><br><span class="line"><span class="keyword">import</span> time</span><br><span class="line"><span class="keyword">import</span> traceback</span><br><span class="line"><span class="keyword">from</span> DBUtils.PooledDB <span class="keyword">import</span> PooledDB</span><br><span class="line"><span class="keyword">from</span> pymysql <span class="keyword">import</span> MySQLError</span><br><span class="line"></span><br><span class="line">pymysql.install_as_MySQLdb()</span><br><span class="line">con = <span class="keyword">None</span></span><br><span class="line">pooledDB = <span class="keyword">None</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">try_insert</span><span class="params">()</span>:</span></span><br><span class="line"> i = <span class="number">0</span></span><br><span class="line"> <span class="keyword">while</span> i < <span 
class="number">60</span>:</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"============ {0} ============"</span>.format(i)</span><br><span class="line"> <span class="comment"># 除了第一次从库中拿,不用ping,直接接初始化链接</span></span><br><span class="line"> <span class="comment"># 后面如果有cache connection,则从cache中并且进行ping,如果失败则用_create()重新初始化connection</span></span><br><span class="line"> <span class="comment"># con 类型为 PooledDedicatedDBConnection</span></span><br><span class="line"> <span class="comment"># con._con 类型为 SteadyDBConnection</span></span><br><span class="line"> <span class="comment"># con._con._con 类型为 pymysql中的Connection类型</span></span><br><span class="line"> <span class="comment"># con._con._con._sock 类型为 mysql 连接</span></span><br><span class="line"> con = pooledDB.connection()</span><br><span class="line"> <span class="comment"># con._con._con.autocommit(True)</span></span><br><span class="line"> <span class="keyword">print</span> <span class="string">"con._con id = {0}"</span>.format(id(con._con))</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"con._con._con id = {0}"</span>.format(id(con._con._con))</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"con._con._con._sock id = {0}"</span>.format(id(con._con._con._sock))</span><br><span class="line"> <span class="keyword">try</span>:</span><br><span class="line"> cursor = con.cursor(pymysql.cursors.DictCursor)</span><br><span class="line"> <span class="keyword">if</span> <span class="keyword">not</span> cursor:</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"cursor is {0}"</span>.format(cursor)</span><br><span class="line"> select_sql = <span class="string">"insert into user2(name,age) values('zhang', 20)"</span></span><br><span class="line"> ret_rows = cursor.execute(select_sql)</span><br><span class="line"> <span class="keyword">print</span> 
cursor._last_executed</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"ret_rows is {0}"</span>.format(ret_rows)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">except</span> MySQLError <span class="keyword">as</span> e:</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"MySQLError error: {0}"</span>.format(e)</span><br><span class="line"> <span class="keyword">print</span> traceback.format_exc()</span><br><span class="line"> <span class="keyword">except</span> Exception <span class="keyword">as</span> e:</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"Exception error: {0}"</span>.format(e)</span><br><span class="line"> <span class="keyword">print</span> traceback.format_exc()</span><br><span class="line"></span><br><span class="line"> i = i + <span class="number">1</span></span><br><span class="line"> time.sleep(<span class="number">1</span>)</span><br><span class="line"> con.close()</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> __name__ == <span class="string">"__main__"</span>:</span><br><span class="line"> db_conf = {<span class="string">'user'</span>:<span class="string">'root'</span>,<span class="string">'passwd'</span>:<span class="string">'zhang'</span>,<span class="string">'host'</span>:<span class="string">'127.0.0.1'</span>,<span class="string">'port'</span>:<span class="number">3306</span>,<span class="string">'connect_timeout'</span>:<span class="number">5</span>,<span class="string">'db'</span>:<span class="string">'test_dbutils'</span>}</span><br><span class="line"> <span class="comment"># db_conf = {'user':'root','passwd':'zhang','host':'127.0.0.1','port':3306,'connect_timeout':5,'db':'test_dbutils',"autocommit":True}</span></span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> pooledDB = 
PooledDB(</span><br><span class="line"> creator=pymysql, <span class="comment"># 使用数据库连接的模块</span></span><br><span class="line"> maxconnections=<span class="number">4</span>, <span class="comment"># 连接池允许的最大连接数,0和None表示不限制连接数</span></span><br><span class="line"> mincached=<span class="number">0</span>, <span class="comment"># 初始化时,连接池中至少创建的空闲的链接,0表示不创建</span></span><br><span class="line"> maxcached=<span class="number">0</span>, <span class="comment"># 连接池中最多闲置的链接,0和None不限制</span></span><br><span class="line"> maxshared=<span class="number">0</span>, <span class="comment"># 连接池中最多共享的链接数量,0表示不共享。PS: 无用,因为pymysql和MySQLdb等模块的 threadsafety都为1,此值只有在creator.threadsafety > 1时设置才有效,否则创建的都是dedicated connection,即此连接是线程专用的。</span></span><br><span class="line"> blocking=<span class="keyword">True</span>, <span class="comment"># 连接池中如果没有可用连接后,是否阻塞等待。True,等待;False,不等待然后报错</span></span><br><span class="line"> maxusage=<span class="keyword">None</span>, <span class="comment"># 一个连接最多被重复使用的次数,None表示无限制</span></span><br><span class="line"> setsession=[<span class="string">"set autocommit=1"</span>], <span class="comment"># 开始会话前执行的命令列表。如:["set datestyle to ...", "set time zone ..."];务必要设置autocommit,否则可能导致该session的sql未提交</span></span><br><span class="line"> ping=<span class="number">1</span>, <span class="comment"># 每次从pool中取连接时ping一次检查可用性</span></span><br><span class="line"> reset=<span class="keyword">False</span>, <span class="comment"># 每次将连接放回pool时,将未提交的内容回滚;False时只对事务操作进行回滚</span></span><br><span class="line"> **db_conf</span><br><span class="line"> )</span><br><span class="line"></span><br><span class="line"> try_insert()</span><br></pre></td></tr></table></figure><a 
id="more"></a><p>同时分析dbutils的关键源码:</p><ul><li>PooledDB:代表连接池,负责控制连接的数量、达到上限时是否阻塞、取出连接、放回连接等连接管理层面的工作。</li><li>PooledDedicatedDBConnection:池专用连接的辅助代理类,调用pooledDB.connection()时,返回的就是这个连接。它保存了底层连接SteadyDBConnection,调用PooledDedicatedDBConnection的任何方法,除了close,都会直接调用SteadyDBConnection对应的方法。</li><li>SteadyDBConnection:稳定数据库连接,负责封装驱动层面(如pymysql)的数据库连接、创建数据库连接、执行数据库连接的ping方法、执行execute方法。</li></ul><p>SteadyDBConnection的<code>_ping_check()</code>方法有重连机制,对这部分源码添加debug信息帮助排查问题:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span
class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">_ping_check</span><span class="params">(self, ping=<span class="number">1</span>, reconnect=True)</span>:</span></span><br><span class="line"> <span class="string">"""Check whether the connection is still alive using ping().</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="string"> If the the underlying connection is not active and the ping</span></span><br><span class="line"><span class="string"> parameter is set accordingly, the connection will be recreated</span></span><br><span class="line"><span class="string"> unless the connection is currently inside a transaction.</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="string"> """</span></span><br><span class="line"> <span class="keyword">if</span> ping & self._ping:</span><br><span class="line"> <span class="keyword">try</span>: <span class="comment"># if possible, ping the connection</span></span><br><span class="line"> my_reconnect = <span class="keyword">True</span></span><br><span class="line"> alive = self._con.ping(reconnect=my_reconnect)</span><br><span class="line"> <span class="comment"># 源码为: alive = self._con.ping() </span></span><br><span class="line"> <span class="comment"># print "try to ping by pymysql(reconnect={0})".format(my_reconnect)</span></span><br><span class="line"> <span class="comment"># my_reconnect = False</span></span><br><span class="line"> <span class="comment"># try:</span></span><br><span class="line"> <span class="comment"># print "try to ping by pymysql(reconnect={0})".format(my_reconnect)</span></span><br><span class="line"> <span class="comment"># alive = self._con.ping(False) # do not reconnect</span></span><br><span class="line"> <span 
class="comment"># except TypeError:</span></span><br><span class="line"> <span class="comment"># print "try to ping by pymysql(reconnect={0}) did not have ping(False)".format(my_reconnect)</span></span><br><span class="line"> <span class="comment"># alive = self._con.ping()</span></span><br><span class="line"> <span class="keyword">except</span> (AttributeError, IndexError, TypeError, ValueError):</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"ping method is not available"</span></span><br><span class="line"> self._ping = <span class="number">0</span> <span class="comment"># ping() is not available</span></span><br><span class="line"> alive = <span class="keyword">None</span></span><br><span class="line"> reconnect = <span class="keyword">False</span></span><br><span class="line"> <span class="keyword">except</span> Exception,e :</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"try to ping by pymysql(reconnect={0}) fail"</span>.format(my_reconnect)</span><br><span class="line"> alive = <span class="keyword">False</span></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="keyword">if</span> alive <span class="keyword">is</span> <span class="keyword">None</span>:</span><br><span class="line"> alive = <span class="keyword">True</span></span><br><span class="line"> <span class="keyword">if</span> alive:</span><br><span class="line"> reconnect = <span class="keyword">False</span></span><br><span class="line"> <span class="keyword">print</span> <span class="string">"try to ping by pymysql(reconnect={0}) success"</span>.format(my_reconnect)</span><br><span class="line"> <span class="keyword">if</span> reconnect <span class="keyword">and</span> <span class="keyword">not</span> self._transaction:</span><br><span class="line"> <span class="keyword">try</span>: <span class="comment"># try to reopen the connection</span></span><br><span 
class="line"> <span class="keyword">print</span> <span class="string">"try to reconnect by dbutils"</span></span><br><span class="line"> con = self._create()</span><br><span class="line"> <span class="keyword">except</span> Exception:</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"try to reconnect by dbutils fail"</span></span><br><span class="line"> <span class="keyword">pass</span></span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="keyword">print</span> <span class="string">"try to reconnect by dbutils success"</span></span><br><span class="line"> self._close()</span><br><span class="line"> self._store(con)</span><br><span class="line"> alive = <span class="keyword">True</span></span><br><span class="line"> <span class="keyword">return</span> alive</span><br></pre></td></tr></table></figure><p>分别修改myreconnect的值进行测试:</p><ul><li>测试1:my_reconnect=True(对于pymysql的ping默认值为True),运行demo期间重启数据库。</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">异常开始:</span><br><span class="line">try to ping by pymysql(reconnect=True)</span><br><span class="line">try to ping by pymysql(reconnect=True) fail</span><br><span class="line">try to reconnect by dbutils</span><br><span class="line">try to reconnect by dbutils fail</span><br><span class="line"></span><br><span class="line">异常恢复:</span><br><span class="line">try to ping by pymysql(reconnect=True)</span><br><span class="line">try to ping by pymysql(reconnect=True) success</span><br><span class="line"></span><br><span 
class="line">恢复后,不能入库</span><br></pre></td></tr></table></figure><ul><li>测试2:my_reconnect=False,运行demo期间重启数据库。</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line">异常开始:</span><br><span class="line">try to ping by pymysql(reconnect=False)</span><br><span class="line">try to ping by pymysql(reconnect=False) fail</span><br><span class="line">try to reconnect by dbutils</span><br><span class="line">try to reconnect by dbutils fail</span><br><span class="line"></span><br><span class="line">异常恢复:</span><br><span class="line">try to ping by pymysql(reconnect=False)</span><br><span class="line">try to ping by pymysql(reconnect=False) fail</span><br><span class="line">try to reconnect by dbutils</span><br><span class="line">try to reconnect by dbutils success</span><br><span class="line"></span><br><span class="line">恢复后,能够入库</span><br></pre></td></tr></table></figure><p>经过调试发现,pymysql的ping方法默认情况下会进行重连,而不是由dbutils进行重连。所以dbutils的<code>_ping_check</code>方法中的重连机制有几率不会执行,因为pymysql的ping已经完成了重连,从而导致 setsession 中的配置没有在拿连接的时候设置进去。</p><p>原因总结:</p><ol><li>pymysql的ping有个默认参数reconnect,并且默认值为True,即reconnect=True</li><li>dbutils的ping机制依赖于pymysql原生的ping,并且默认不设置任何参数,即在目前版本的dbutils下其ping默认会自动reconnect</li><li>dbutils的ping默认调用时机:只要从pool中拿连接就会进行ping</li><li>初始化PooledDB时,通过setsession参数的方式,设置自动commit,即 setsession=[“set
autocommit=1”]。因此只要是从dbutils拿连接的时候,都会预先配置该session,即执行业务sql前,先执行setsession的内容</li><li>在连接丢失或者其他异常时,由于pymysql的ping默认进行重连,故dbutils层面无法感知已经重连,setsession也不会再次执行,故后续该连接执行的sql不会进行commit。</li><li>这是个bug,已经提issue给dbutils的作者,在最近的版本会修复这个bug。问题的重现以及修复方法详见:<a href="https://github.com/Cito/DBUtils/issues/23" target="_blank" rel="noopener">When use pymsql driver, the setsession params for PooledDB is not work after mysql server restart</a></li></ol><h2 id="解决方案"><a href="#解决方案" class="headerlink" title="解决方案"></a>解决方案</h2><p>在dbutils未修复该bug前,可以通过PooledDB的kwargs参数透传{“autocommit”:True}到pymysql中,这样即使是通过pymysql的ping方法重连的连接,也会保留自动提交的功能。</p>]]></content>
<tags>
<tag> mysql </tag>
<tag> python </tag>
</tags>
</entry>
<entry>
<title>Spark学习系列之二:rdd分区数量分析</title>
<link href="/2019/12/22/Spark%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%BA%8C%EF%BC%9Ardd%E5%88%86%E5%8C%BA%E6%95%B0%E9%87%8F%E5%88%86%E6%9E%90.html"/>
<url>/2019/12/22/Spark%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%BA%8C%EF%BC%9Ardd%E5%88%86%E5%8C%BA%E6%95%B0%E9%87%8F%E5%88%86%E6%9E%90.html</url>
<content type="html"><![CDATA[<blockquote><p>如无特别说明,本文源码版本为 spark 2.3.4</p></blockquote><p>创建rdd有三种方式,一种是通过SparkContext.textFile()访问外部存储创建,一种是通过输入数据集合通过调用 SparkContext.parallelize() 方法来创建,最后一种是通过转换已有的rdd生成新的rdd。</p><h2 id="通过parallelize创建rdd的分区数量分析"><a href="#通过parallelize创建rdd的分区数量分析" class="headerlink" title="通过parallelize创建rdd的分区数量分析"></a>通过parallelize创建rdd的分区数量分析</h2><p>通过parallelize的方式比较简单,相信也是大部分初学者第一次接触创建rdd的方法,那么通过这个方法创建的rdd的默认分区数是多少呢?我们通过源码进行分析。</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> org.apache.spark.<span class="type">SparkContext</span></span><br><span class="line"></span><br><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">SparkContext</span>(<span class="params">config: <span class="type">SparkConf</span></span>) <span class="keyword">extends</span> <span class="title">Logging</span> </span>{</span><br><span class="line"> <span class="comment">/** Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD). 
*/</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">defaultParallelism</span></span>: <span class="type">Int</span> = {</span><br><span class="line"> assertNotStopped()</span><br><span class="line"> taskScheduler.defaultParallelism</span><br><span class="line"> }</span><br><span class="line"> </span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">parallelize</span></span>[<span class="type">T</span>: <span class="type">ClassTag</span>](</span><br><span class="line"> seq: <span class="type">Seq</span>[<span class="type">T</span>],</span><br><span class="line"> numSlices: <span class="type">Int</span> = defaultParallelism): <span class="type">RDD</span>[<span class="type">T</span>] = withScope {</span><br><span class="line"> assertNotStopped()</span><br><span class="line"> <span class="keyword">new</span> <span class="type">ParallelCollectionRDD</span>[<span class="type">T</span>](<span class="keyword">this</span>, seq, numSlices, <span class="type">Map</span>[<span class="type">Int</span>, <span class="type">Seq</span>[<span class="type">String</span>]]())</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>我们先看看parallelize是如何生成rdd的。可以看到它是通过 ParallelCollectionRDD 类创建一个rdd,其内部返回的partitioner是通过ParallelCollectionRDD伴生对象的slice方法分割seq为一个二维的Seq[Seq[T]],并把这个二维的序列传递到ParallelCollectionPartition中实例化的。</p><p>接下来是关键,<code>defaultParallelism</code>的默认值确定了分区的数量。</p><a id="more"></a><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span 
class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span 
class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> org.apache.spark.<span class="type">SparkContext</span></span><br><span class="line"></span><br><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">SparkContext</span>(<span class="params">config: <span class="type">SparkConf</span></span>) <span class="keyword">extends</span> <span class="title">Logging</span> </span>{</span><br><span class="line"> <span class="keyword">private</span> <span class="keyword">var</span> _taskScheduler: <span class="type">TaskScheduler</span> = _</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 等于task调度器的defaultParallelism</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">defaultParallelism</span></span>: <span class="type">Int</span> = {</span><br><span class="line"> assertNotStopped()</span><br><span class="line"> taskScheduler.defaultParallelism</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"> </span><br><span class="line">--------</span><br><span class="line"><span class="keyword">package</span> org.apache.spark.scheduler</span><br><span class="line"></span><br><span class="line"><span class="comment">// 特质TaskScheduler,定义task调度器的方法</span></span><br><span class="line"><span class="keyword">private</span>[spark] <span class="class"><span class="keyword">trait</span> <span class="title">TaskScheduler</span> 
</span>{</span><br><span class="line"> <span class="comment">// 定义获取默认并行度的接口</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">defaultParallelism</span></span>(): <span class="type">Int</span></span><br><span class="line">}</span><br><span class="line">--------</span><br><span class="line"><span class="keyword">package</span> org.apache.spark.scheduler</span><br><span class="line"></span><br><span class="line"><span class="comment">// TaskScheduler的具体实现类</span></span><br><span class="line"><span class="keyword">private</span>[spark] <span class="class"><span class="keyword">class</span> <span class="title">TaskSchedulerImpl</span>(<span class="params"></span></span></span><br><span class="line"><span class="class"><span class="params"> val sc: <span class="type">SparkContext</span>,</span></span></span><br><span class="line"><span class="class"><span class="params"> val maxTaskFailures: <span class="type">Int</span>,</span></span></span><br><span class="line"><span class="class"><span class="params"> isLocal: <span class="type">Boolean</span> = false</span>)</span></span><br><span class="line"><span class="class"> <span class="keyword">extends</span> <span class="title">TaskScheduler</span> <span class="keyword">with</span> <span class="title">Logging</span> </span>{</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 后台调度器特质</span></span><br><span class="line"> <span class="keyword">var</span> backend: <span class="type">SchedulerBackend</span> = <span class="literal">null</span></span><br><span class="line"> <span class="comment">// 实现了TaskScheduler中的defaultParallelism接口,并返回从成员变量后台调度器特质backend返回backend.defaultParallelism()</span></span><br><span class="line"> <span class="keyword">override</span> <span class="function"><span class="keyword">def</span> <span class="title">defaultParallelism</span></span>(): <span class="type">Int</span> = 
backend.defaultParallelism()</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">--------</span><br><span class="line"><span class="keyword">package</span> org.apache.spark.scheduler</span><br><span class="line"></span><br><span class="line"><span class="comment">// 定义后台调度器特质</span></span><br><span class="line"><span class="keyword">private</span>[spark] <span class="class"><span class="keyword">trait</span> <span class="title">SchedulerBackend</span> </span>{</span><br><span class="line"> <span class="comment">// 定义抽象方法</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">defaultParallelism</span></span>(): <span class="type">Int</span></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">--------</span><br><span class="line"><span class="keyword">package</span> org.apache.spark.scheduler.local</span><br><span class="line"></span><br><span class="line"><span class="comment">// 本地后台调度器,为SchedulerBackend特质的一种具体实现</span></span><br><span class="line"><span class="keyword">private</span>[spark] <span class="class"><span class="keyword">class</span> <span class="title">LocalSchedulerBackend</span>(<span class="params"></span></span></span><br><span class="line"><span class="class"><span class="params"> conf: <span class="type">SparkConf</span>,</span></span></span><br><span class="line"><span class="class"><span class="params"> scheduler: <span class="type">TaskSchedulerImpl</span>,</span></span></span><br><span class="line"><span class="class"><span class="params"> val totalCores: <span class="type">Int</span></span>)</span></span><br><span class="line"><span class="class"> <span class="keyword">extends</span> <span class="title">SchedulerBackend</span> <span class="keyword">with</span> <span class="title">ExecutorBackend</span> <span class="keyword">with</span> <span class="title">Logging</span> 
</span>{</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 实现SchedulerBackend中的defaultParallelism方法,返回配置中的"spark.default.parallelism",</span></span><br><span class="line"> <span class="comment">// 如果没有定义则返回从SparkContext传入的totalCores。SparkContex的master为 local 则totalCores=1;</span></span><br><span class="line"> <span class="comment">// master为local[*] 则totalCores=Runtime.getRuntime.availableProcessors();</span></span><br><span class="line"> <span class="comment">// master为local[N],则totalCores=N</span></span><br><span class="line"> <span class="comment">// 传入totalcores的计算见org.apache.spark.SparkContext.createTaskScheduler()方法</span></span><br><span class="line"> <span class="keyword">override</span> <span class="function"><span class="keyword">def</span> <span class="title">defaultParallelism</span></span>(): <span class="type">Int</span> =</span><br><span class="line"> scheduler.conf.getInt(<span class="string">"spark.default.parallelism"</span>, totalCores)</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">--------</span><br><span class="line"><span class="keyword">package</span> org.apache.spark.scheduler.cluster</span><br><span class="line"></span><br><span class="line"><span class="comment">// 为StandaloneSchedulerBackend调度器的父类,适用于standalone模式</span></span><br><span class="line"><span class="keyword">private</span>[spark]</span><br><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">CoarseGrainedSchedulerBackend</span>(<span class="params">scheduler: <span class="type">TaskSchedulerImpl</span>, val rpcEnv: <span class="type">RpcEnv</span></span>)</span></span><br><span class="line"><span class="class"> <span class="keyword">extends</span> <span class="title">ExecutorAllocationClient</span> <span class="keyword">with</span> <span class="title">SchedulerBackend</span> <span class="keyword">with</span> <span class="title">Logging</span> 
</span>{</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// Use an atomic variable to track total number of cores in the cluster for simplicity and speed</span></span><br><span class="line"> <span class="comment">// totalCoreCount会根据注册/解注册的executor的core数量动态进行增减</span></span><br><span class="line"> <span class="keyword">protected</span> <span class="keyword">val</span> totalCoreCount = <span class="keyword">new</span> <span class="type">AtomicInteger</span>(<span class="number">0</span>)</span><br><span class="line"> <span class="keyword">protected</span> <span class="keyword">val</span> conf = scheduler.sc.conf</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 实现SchedulerBackend中的defaultParallelism方法,返回配置中的"spark.default.parallelism",</span></span><br><span class="line"> <span class="comment">// 如果没有定义则返回 max(totalCoreCount, 2),注意totalCoreCount并不一定是运行命令时`--total-executor-cores`申请spark.cores.max值</span></span><br><span class="line"> <span class="comment">// totalCoreCount小于spark.cores.max,当集群资源不够或者超时时,也会直接运行:</span></span><br><span class="line"> <span class="comment">// 1. 计算totalCoreCount > spark.cores.max * spark.scheduler.minRegisteredResourcesRatio(默认为0)</span></span><br><span class="line"> <span class="comment">// 2. 
计算等待时间 maxRegisteredWaitingTimeMs,当其大于spark.scheduler.maxRegisteredResourcesWaitingTime(默认为30s)时</span></span><br><span class="line"> <span class="keyword">override</span> <span class="function"><span class="keyword">def</span> <span class="title">defaultParallelism</span></span>(): <span class="type">Int</span> = {</span><br><span class="line"> conf.getInt(<span class="string">"spark.default.parallelism"</span>, math.max(totalCoreCount.get(), <span class="number">2</span>))</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>通过以上分析,我们可知通过parallelize创建rdd时,分区数量根据以下情况确定</p><ul><li>如果部署模式为local:<ul><li>如果定义了<code>spark.default.parallelism</code>则以其值作为分区大小</li><li>如果没有定义<code>spark.default.parallelism</code>,则以解析master参数中指定的值为分区大小</li></ul></li><li>如果部署模式为standalone:<ul><li>如果定义了<code>spark.default.parallelism</code>则以其值作为分区大小</li><li>如果没有定义<code>spark.default.parallelism</code>,则为math.max(totalCoreCount, 2),其中totalCoreCount为executor注册的所拥有core数量,不一定是申请core的总数。</li></ul></li></ul><blockquote><p>TODO yarn模式的还未考虑,以后有时间加进来</p></blockquote><h2 id="对现有rdd进行transformation后分区数量分析"><a href="#对现有rdd进行transformation后分区数量分析" class="headerlink" title="对现有rdd进行transformation后分区数量分析"></a>对现有rdd进行transformation后分区数量分析</h2><p>上一小节通过分析后台调度器的相关源码,我们已经知道通过parallelize创建rdd时partition的确定方法。这一节我们探讨通过转换前后分区数量如何确定。</p><h3 id="以map-为例"><a href="#以map-为例" class="headerlink" title="以map()为例"></a>以map()为例</h3><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span 
class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> org.apache.spark.rdd</span><br><span class="line"></span><br><span class="line"><span class="comment">// map等转换的底层实现是MapPartitionsRDD</span></span><br><span class="line"><span class="keyword">private</span>[spark] <span class="class"><span class="keyword">class</span> <span class="title">MapPartitionsRDD</span>[<span class="type">U</span>: <span class="type">ClassTag</span>, <span class="type">T</span>: <span class="type">ClassTag</span>](<span 
class="params"></span></span></span><br><span class="line"><span class="class"><span class="params"> var prev: <span class="type">RDD</span>[<span class="type">T</span>],</span></span></span><br><span class="line"><span class="class"><span class="params"> f: (<span class="type">TaskContext</span>, <span class="type">Int</span>, <span class="type">Iterator</span>[<span class="type">T</span>]</span>) <span class="title">=></span> <span class="title">Iterator</span>[<span class="type">U</span>], <span class="title">//</span> (<span class="params"><span class="type">TaskContext</span>, partition index, iterator</span>)</span></span><br><span class="line"><span class="class"> <span class="title">preservesPartitioning</span></span>: <span class="type">Boolean</span> = <span class="literal">false</span>,</span><br><span class="line"> isOrderSensitive: <span class="type">Boolean</span> = <span class="literal">false</span>)</span><br><span class="line"> <span class="keyword">extends</span> <span class="type">RDD</span>[<span class="type">U</span>](prev) {</span><br><span class="line"> <span class="comment">// 这里需要注意,实例化MapPartitionsRDD时,会调用RDD的单参数rdd的构造方法。</span></span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 分区器继承血统中第一个父类的partitioner(对于map来说只有一个父rdd),如果有的话</span></span><br><span class="line"> <span class="keyword">override</span> <span class="keyword">val</span> partitioner = <span class="keyword">if</span> (preservesPartitioning) firstParent[<span class="type">T</span>].partitioner <span class="keyword">else</span> <span class="type">None</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">// 分区继承血统中第一个父类的partitions(对于map来说只有一个父rdd)</span></span><br><span class="line"> <span class="keyword">override</span> <span class="function"><span class="keyword">def</span> <span class="title">getPartitions</span></span>: <span class="type">Array</span>[<span class="type">Partition</span>] = firstParent[<span 
class="type">T</span>].partitions</span><br><span class="line"></span><br><span class="line"> <span class="keyword">override</span> <span class="function"><span class="keyword">def</span> <span class="title">compute</span></span>(split: <span class="type">Partition</span>, context: <span class="type">TaskContext</span>): <span class="type">Iterator</span>[<span class="type">U</span>] =</span><br><span class="line"> f(context, split.index, firstParent[<span class="type">T</span>].iterator(split, context))</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="keyword">abstract</span> <span class="class"><span class="keyword">class</span> <span class="title">RDD</span>[<span class="type">T</span>: <span class="type">ClassTag</span>](<span class="params"></span></span></span><br><span class="line"><span class="class"><span class="params"> @transient private var _sc: <span class="type">SparkContext</span>,</span></span></span><br><span class="line"><span class="class"><span class="params"> @transient private var deps: <span class="type">Seq</span>[<span class="type">Dependency</span>[_]]</span></span></span><br><span class="line"><span class="class"><span class="params"> </span>) <span class="keyword">extends</span> <span class="title">Serializable</span> <span class="keyword">with</span> <span class="title">Logging</span> </span>{</span><br><span class="line"></span><br><span class="line"> <span class="comment">/** Construct an RDD with just a one-to-one dependency on one parent */</span></span><br><span class="line"> <span class="comment">// 实现单参数rdd的构造方法</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">this</span></span>(<span class="meta">@transient</span> oneParent: <span class="type">RDD</span>[_]) =</span><br><span class="line"> <span class="keyword">this</span>(oneParent.context, <span class="type">List</span>(<span 
class="keyword">new</span> <span class="type">OneToOneDependency</span>(oneParent)))</span><br><span class="line"></span><br><span class="line"> <span class="comment">/**</span></span><br><span class="line"><span class="comment"> * Implemented by subclasses to return how this RDD depends on parent RDDs. This method will only</span></span><br><span class="line"><span class="comment"> * be called once, so it is safe to implement a time-consuming computation in it.</span></span><br><span class="line"><span class="comment"> */</span></span><br><span class="line"> <span class="comment">// getDependencies等于rdd构造方法参数中的deps</span></span><br><span class="line"> <span class="keyword">protected</span> <span class="function"><span class="keyword">def</span> <span class="title">getDependencies</span></span>: <span class="type">Seq</span>[<span class="type">Dependency</span>[_]] = deps</span><br><span class="line"></span><br><span class="line"> <span class="comment">/**</span></span><br><span class="line"><span class="comment"> * Get the list of dependencies of this RDD, taking into account whether the</span></span><br><span class="line"><span class="comment"> * RDD is checkpointed or not.</span></span><br><span class="line"><span class="comment"> */</span></span><br><span class="line"> <span class="keyword">final</span> <span class="function"><span class="keyword">def</span> <span class="title">dependencies</span></span>: <span class="type">Seq</span>[<span class="type">Dependency</span>[_]] = {</span><br><span class="line"> checkpointRDD.map(r => <span class="type">List</span>(<span class="keyword">new</span> <span class="type">OneToOneDependency</span>(r))).getOrElse {</span><br><span class="line"> <span class="keyword">if</span> (dependencies_ == <span class="literal">null</span>) {</span><br><span class="line"> <span class="comment">// 先不考虑checkpoint的情况,则dependencies= dependencies_ = getDependencies</span></span><br><span class="line"> dependencies_ = 
getDependencies</span><br><span class="line"> }</span><br><span class="line"> dependencies_</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> </span><br><span class="line"> <span class="comment">/** Returns the first parent RDD */</span></span><br><span class="line"> <span class="keyword">protected</span>[spark] <span class="function"><span class="keyword">def</span> <span class="title">firstParent</span></span>[<span class="type">U</span>: <span class="type">ClassTag</span>]: <span class="type">RDD</span>[<span class="type">U</span>] = {</span><br><span class="line"> <span class="comment">// firstParent为dependencies容器中第一个元素</span></span><br><span class="line"> dependencies.head.rdd.asInstanceOf[<span class="type">RDD</span>[<span class="type">U</span>]]</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>这里需要注意区分Partitioner和Partition。Partitioner是分区器,需要定义分区的数量numPartitions,以及通过传入key决定其在哪个partition的getPartition(key: Any)方法。而Partition则描述了当前rdd的分区状态,对于map而言其分区状态和父rdd一致。当然rdd也可以只有Partition而没有Partitioner,如默认情况下经过map转换的rdd,以及本文第一部分描述的通过parallelize创建的rdd,都没有partitioner,其partitioner为None。</p><p>通过追溯firstParent,可知firstParent <- dependencies.head <- dependencies_.head <- getDependencies.head <- deps.head <- List(new OneToOneDependency(prev)).head (这里完成rdd到dependency的转换),其中prev为调用map方法的rdd,即 MapPartitionsRDD 的父rdd。</p><p>回到map的partitions数量为多少的问题,从源码中也能看到其partitions将保持血统中第一个父rdd的partitions,不会改变原有的分区情况,但是也不会保留原有的分区器。</p><p>类似的,flatMap的实现和map一致。filter也差不多,由于其不会更改父rdd的key,所以preservesPartitioning为true,保留了血统中第一个父rdd的partitioner。</p><h3 id="以reduceByKey-为例"><a href="#以reduceByKey-为例" class="headerlink" title="以reduceByKey()为例"></a>以reduceByKey()为例</h3><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span 
class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span 
class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> org.apache.spark.rdd</span><br><span class="line"></span><br><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">PairRDDFunctions</span>[<span class="type">K</span>, <span class="type">V</span>](<span class="params">self: <span class="type">RDD</span>[(<span class="type">K</span>, <span class="type">V</span></span>)])</span></span><br><span class="line"><span class="class"> (<span class="params">implicit kt: <span class="type">ClassTag</span>[<span class="type">K</span>], vt: <span class="type">ClassTag</span>[<span class="type">V</span>], ord: <span class="type">Ordering</span>[<span class="type">K</span>] = null</span>)</span></span><br><span class="line"><span class="class"> <span class="keyword">extends</span> <span class="title">Logging</span> <span 
class="keyword">with</span> <span class="title">Serializable</span> </span>{</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">reduceByKey</span></span>(func: (<span class="type">V</span>, <span class="type">V</span>) => <span class="type">V</span>): <span class="type">RDD</span>[(<span class="type">K</span>, <span class="type">V</span>)] = self.withScope {</span><br><span class="line"> <span class="comment">// 通过org.apache.spark.Partitioner.defaultPartitioner创建分区器</span></span><br><span class="line"> reduceByKey(defaultPartitioner(self), func)</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">--------</span><br><span class="line"><span class="keyword">package</span> org.apache.spark</span><br><span class="line"></span><br><span class="line"><span class="comment">/**</span></span><br><span class="line"><span class="comment"> * An object that defines how the elements in a key-value pair RDD are partitioned by key.</span></span><br><span class="line"><span class="comment"> * Maps each key to a partition ID, from 0 to `numPartitions - 1`.</span></span><br><span class="line"><span class="comment"> *</span></span><br><span class="line"><span class="comment"> * Note that, partitioner must be deterministic, i.e. 
it must return the same partition id given</span></span><br><span class="line"><span class="comment"> * the same partition key.</span></span><br><span class="line"><span class="comment"> */</span></span><br><span class="line"> <span class="comment">// 抽象分区器</span></span><br><span class="line"><span class="keyword">abstract</span> <span class="class"><span class="keyword">class</span> <span class="title">Partitioner</span> <span class="keyword">extends</span> <span class="title">Serializable</span> </span>{</span><br><span class="line"> <span class="comment">// 需要分多少个区</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">numPartitions</span></span>: <span class="type">Int</span></span><br><span class="line"> <span class="comment">// 传入key,就返回其应该存在哪个分区</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">getPartition</span></span>(key: <span class="type">Any</span>): <span class="type">Int</span></span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="class"><span class="keyword">object</span> <span class="title">Partitioner</span> </span>{</span><br><span class="line"> <span class="comment">/**</span></span><br><span class="line"><span class="comment"> * Choose a partitioner to use for a cogroup-like operation between a number of RDDs.</span></span><br><span class="line"><span class="comment"> *</span></span><br><span class="line"><span class="comment"> * If spark.default.parallelism is set, we'll use the value of SparkContext defaultParallelism</span></span><br><span class="line"><span class="comment"> * as the default partitions number, otherwise we'll use the max number of upstream partitions.</span></span><br><span class="line"><span class="comment"> *</span></span><br><span class="line"><span class="comment"> * When available, we choose the partitioner from rdds with maximum number of partitions. 
If this</span></span><br><span class="line"><span class="comment"> * partitioner is eligible (number of partitions within an order of maximum number of partitions</span></span><br><span class="line"><span class="comment"> * in rdds), or has partition number higher than default partitions number - we use this</span></span><br><span class="line"><span class="comment"> * partitioner.</span></span><br><span class="line"><span class="comment"> *</span></span><br><span class="line"><span class="comment"> * Otherwise, we'll use a new HashPartitioner with the default partitions number.</span></span><br><span class="line"><span class="comment"> *</span></span><br><span class="line"><span class="comment"> * Unless spark.default.parallelism is set, the number of partitions will be the same as the</span></span><br><span class="line"><span class="comment"> * number of partitions in the largest upstream RDD, as this should be least likely to cause</span></span><br><span class="line"><span class="comment"> * out-of-memory errors.</span></span><br><span class="line"><span class="comment"> *</span></span><br><span class="line"><span class="comment"> * We use two method parameters (rdd, others) to enforce callers passing at least 1 RDD.</span></span><br><span class="line"><span class="comment"> */</span></span><br><span class="line"> <span class="comment">// 传入一个rdd以及传入可变长rdd参数 other(即可以不传,也可以传一个或者多个)</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">defaultPartitioner</span></span>(rdd: <span class="type">RDD</span>[_], others: <span class="type">RDD</span>[_]*): <span class="type">Partitioner</span> = {</span><br><span class="line"> <span class="comment">// 拼接两个rdd到序列</span></span><br><span class="line"> <span class="keyword">val</span> rdds = (<span class="type">Seq</span>(rdd) ++ others)</span><br><span class="line"> <span class="comment">// 过滤rdds序列中有partitioner并且对应的numPartitions>0的rdds序列</span></span><br><span 
class="line"> <span class="keyword">val</span> hasPartitioner = rdds.filter(_.partitioner.exists(_.numPartitions > <span class="number">0</span>))</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 从有partitioner的rdds中选择partition数量最大的rdd,称为最大分区器rdd</span></span><br><span class="line"> <span class="keyword">val</span> hasMaxPartitioner: <span class="type">Option</span>[<span class="type">RDD</span>[_]] = <span class="keyword">if</span> (hasPartitioner.nonEmpty) {</span><br><span class="line"> <span class="type">Some</span>(hasPartitioner.maxBy(_.partitions.length))</span><br><span class="line"> } <span class="keyword">else</span> {</span><br><span class="line"> <span class="type">None</span></span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 定义默认的分区数量</span></span><br><span class="line"> <span class="keyword">val</span> defaultNumPartitions = <span class="keyword">if</span> (rdd.context.conf.contains(<span class="string">"spark.default.parallelism"</span>)) {</span><br><span class="line"> <span class="comment">// 如果定义了"spark.default.parallelism",则为其值</span></span><br><span class="line"> rdd.context.defaultParallelism</span><br><span class="line"> } <span class="keyword">else</span> {</span><br><span class="line"> <span class="comment">// 否则为rdds序列中各个rdd分区数的最大值</span></span><br><span class="line"> rdds.map(_.partitions.length).max</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> <span class="comment">// If the existing max partitioner is an eligible one, or its partitions number is larger</span></span><br><span class="line"> <span class="comment">// than the default number of partitions, use the existing partitioner.</span></span><br><span class="line"> <span class="keyword">if</span> (hasMaxPartitioner.nonEmpty && (isEligiblePartitioner(hasMaxPartitioner.get, rdds) ||</span><br><span class="line"> defaultNumPartitions < 
hasMaxPartitioner.get.getNumPartitions)) {</span><br><span class="line"> <span class="comment">// 如果有最大分区器rdd,并且其分区数是合理的;或者有最大分区器rdd,并且其分区数量大于默认的分区数量defaultNumPartitions;返回最大分区器rdd的partitioner</span></span><br><span class="line"> <span class="comment">// 这个if-else语句嵌套到上一个if-else语句的话,代码会更加清晰?</span></span><br><span class="line"> hasMaxPartitioner.get.partitioner.get</span><br><span class="line"> } <span class="keyword">else</span> {</span><br><span class="line"> <span class="comment">// 否则将以默认分区数量defaultNumPartitions实例化一个HashPartitioner,并返回</span></span><br><span class="line"> <span class="keyword">new</span> <span class="type">HashPartitioner</span>(defaultNumPartitions)</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> <span class="comment">/**</span></span><br><span class="line"><span class="comment"> * Returns true if the number of partitions of the RDD is either greater than or is less than and</span></span><br><span class="line"><span class="comment"> * within a single order of magnitude of the max number of upstream partitions, otherwise returns</span></span><br><span class="line"><span class="comment"> * false.</span></span><br><span class="line"><span class="comment"> */</span></span><br><span class="line"> <span class="comment">// 判断最大分区器的rdd的分区数目对于其他rdd是否合理</span></span><br><span class="line"> <span class="keyword">private</span> <span class="function"><span class="keyword">def</span> <span class="title">isEligiblePartitioner</span></span>(</span><br><span class="line"> hasMaxPartitioner: <span class="type">RDD</span>[_],</span><br><span class="line"> rdds: <span class="type">Seq</span>[<span class="type">RDD</span>[_]]): <span class="type">Boolean</span> = {</span><br><span class="line"> <span class="comment">// 获取rdds序列中最大的分区数量</span></span><br><span class="line"> <span class="keyword">val</span> maxPartitions = rdds.map(_.partitions.length).max</span><br><span class="line"> <span 
class="comment">// 如果rdds序列中最大的分区数量与最大分区器的分区数量相差不到一个数量级,则返回true;否则返回false</span></span><br><span class="line"> log10(maxPartitions) - log10(hasMaxPartitioner.getNumPartitions) < <span class="number">1</span></span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>对于reduceByKey方法,当不传numPartitions参数时,其默认的分区器由defaultPartitioner()方法决定,分区器就决定了分区数。</p><p>defaultPartitioner()决定分区器的规则总结如下:</p><ul><li>如果定义了”spark.default.parallelism”,则defaultNumPartitions = “spark.default.parallelism” ;如果未定义,则defaultNumPartitions等于所有rdd分区中最大的分区数</li><li>如果在所有rdd中有对应的partitioner,则选出分区数量最大的partitioner,并且该partitioner的分区数满足以下两个条件之一,则返回该partitioner作为API的partitioner<ul><li>分区数量是合理的</li><li>分区数量大于defaultNumPartitions</li></ul></li><li>否则,返回HashPartitioner(defaultNumPartitions)</li></ul><p>总结,对于reduceByKey等类似的API而言,只要是通过defaultPartitioner()定义分区器的,其分区数量有三种情况:</p><ul><li>等于默认值spark.default.parallelism</li><li>等于所有rdd中最大partition数量</li><li>等于所有partitioner中最大partition数量</li></ul><p>也可以看出此类型的转换,partition数量总是趋向于变大,而”spark.default.parallelism”是个平衡点。</p><p>如果定义了”spark.default.parallelism”:</p><ul><li>如果它定义得很小,对于没有分区器则分区数量很小。对于有分区器,defaultNumPartitions < hasMaxPartitioner.get.getNumPartitions几乎永远为true,将保持最大分区器的分区数量,不会主动干预原来的分区情况。</li><li>如果它定义得很大,对于没有分区器则分区数量很大。对于有分区器,defaultNumPartitions < hasMaxPartitioner.get.getNumPartitions几乎永远为false,结果依赖于最大分区器的分区数量小于分区数量最大的rdd的程度,如果相差不大则保留原来的分区器,如果相差很大,则以”spark.default.parallelism”作为新分区大小。</li></ul><p>如果没定义”spark.default.parallelism”:</p><ul><li>对于没有分区器,则分区数量等于所有rdd中最大partition数量。</li><li>对于有分区器,defaultNumPartitions < hasMaxPartitioner.get.getNumPartitions永远为false,结果依赖于最大分区器的分区数小于分区数量最大的rdd的程度,如果相差不大则保留原来的分区器,如果相差很大,则以所有rdd的最大分区数为新分区大小。</li></ul><h2 id="保持partitioner的transformation"><a href="#保持partitioner的transformation" class="headerlink" title="保持partitioner的transformation"></a>保持partitioner的transformation</h2><p>如上所述,rdd的partitioner是决定分区数量的重要因素,对于以下transformation <strong>默认</strong> 将会保留和传播partitioner: 
</p><ul><li>cogroup</li><li>groupWith</li><li>join</li><li>leftOuterJoin</li><li>rightOuterJoin</li><li>groupByKey</li><li>reduceByKey</li><li>foldByKey</li><li>combineByKey</li><li>partitionBy</li><li>mapValues </li><li>flatMapValues </li><li>filter </li></ul><p>其他transformation将默认不保持分区器。因为其他操作(比如map)可能会修改key,修改了key后,原来的分区器就失去了它的意义。相反地,mapValues只修改value不修改key,所以其保留和传播分区器是合理的。</p><h2 id="参考"><a href="#参考" class="headerlink" title="参考"></a>参考</h2><p><a href="https://github.com/rohgar/scala-spark-4/wiki/Partitioning" target="_blank" rel="noopener">https://github.com/rohgar/scala-spark-4/wiki/Partitioning</a></p><blockquote><p>TODO 通过文件创建rdd还未考虑,以后有时间加进来<br>本文为学习过程中产生的总结,由于学艺不精可能有些观点或者描述有误,还望各位同学帮忙指正,共同进步。</p></blockquote>]]></content>
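上文总结的 defaultPartitioner() 分区数决定规则,可以用一段独立的 Scala 草稿来验证。注意:这只是为便于理解而做的简化模型(假设用一个 Rdd 样例类代替真正的 RDD 对象,只保留分区数和分区器分区数两个字段),并非 Spark 的真实实现。

```scala
// 简化模型:只演示 Partitioner.defaultPartitioner 决定分区数的规则,
// Rdd 样例类是假设的占位结构,不是 Spark 的 API。
object DefaultPartitionerSketch {
  // partitions: 该 rdd 的分区数;partitioner: 若有分区器,则为其 numPartitions
  case class Rdd(partitions: Int, partitioner: Option[Int])

  def decideNumPartitions(defaultParallelism: Option[Int], rdds: Seq[Rdd]): Int = {
    // 过滤出有 partitioner 且 numPartitions > 0 的 rdd
    val hasPartitioner = rdds.filter(_.partitioner.exists(_ > 0))
    // 其中分区数最大的,即"最大分区器rdd"
    val hasMaxPartitioner =
      if (hasPartitioner.nonEmpty) Some(hasPartitioner.maxBy(_.partitions)) else None
    // 默认分区数:spark.default.parallelism,否则取上游最大分区数
    val defaultNumPartitions = defaultParallelism.getOrElse(rdds.map(_.partitions).max)
    // 合理性判断:最大上游分区数与最大分区器分区数相差不到一个数量级
    def isEligible(p: Rdd): Boolean =
      math.log10(rdds.map(_.partitions).max) - math.log10(p.partitioner.get) < 1
    hasMaxPartitioner match {
      // 合理、或分区数大于默认值:沿用已有分区器的分区数
      case Some(p) if isEligible(p) || defaultNumPartitions < p.partitioner.get =>
        p.partitioner.get
      // 否则等价于 new HashPartitioner(defaultNumPartitions)
      case _ => defaultNumPartitions
    }
  }

  def main(args: Array[String]): Unit = {
    // 都没有分区器且未设置 spark.default.parallelism:取上游最大分区数
    assert(decideNumPartitions(None, Seq(Rdd(4, None), Rdd(8, None))) == 8)
    // 已有分区器且与最大上游分区数在一个数量级内:保留该分区器
    assert(decideNumPartitions(Some(100), Seq(Rdd(10, Some(10)), Rdd(20, None))) == 10)
    println("ok")
  }
}
```

可以看到,只有当已有分区器的分区数比上游最大分区数小一个数量级以上、且小于默认分区数时,才会退回到以默认分区数新建 HashPartitioner。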
<tags>
<tag> spark </tag>
<tag> scala </tag>
</tags>
</entry>
<entry>
<title>Spark学习系列之一:新手常见问题</title>
<link href="/2019/12/16/Spark%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%B8%80%EF%BC%9A%E6%96%B0%E6%89%8B%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98.html"/>
<url>/2019/12/16/Spark%E5%AD%A6%E4%B9%A0%E7%B3%BB%E5%88%97%E4%B9%8B%E4%B8%80%EF%BC%9A%E6%96%B0%E6%89%8B%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98.html</url>
<content type="html"><![CDATA[<blockquote><p>如无特别说明,本文源码版本为 spark 2.3.4</p></blockquote><p>学习spark有一段时间了,最近想动动手写个<a href="https://github.com/salmon7/spark-and-scala-learning" target="_blank" rel="noopener">demo</a>出来,大致的功能是从kafka读取用户点击记录,用spark streaming对这些数据进行读取并统计用户一段时间的点击记录,期望最后能落盘到redis中供需求方调用。</p><p>这个demo看似简单,但是作为一个新手,我也遇到了一些看起来比较奇怪的问题。在此总结一下我遇到的一些问题,希望能给遇到同样问题的人带来一些帮助。</p><h2 id="问题一:spark的并行度是多少?"><a href="#问题一:spark的并行度是多少?" class="headerlink" title="问题一:spark的并行度是多少?"></a>问题一:spark的并行度是多少?</h2><p>我相信一开始接触的初学者对此肯定有疑惑,并行度指的什么?我认为在spark中,这个并行度指的是partition的数量,无论是通过parallelize初始化rdd,还是通过join和reduceByKey等shuffle操作,都意味着需要确定这个新rdd的partition数量。这里涉及到一个参数<code>spark.default.parallelism</code>,该参数<strong>大多数情况下</strong>是parallelize、join、reduceByKey等操作的<strong>默认</strong>并行度。如果不定义这个参数,默认情况下分区数量在不同情景的情况下有所不同:</p><ul><li>对于join和reduceByKey等shuffle操作,分区数一般为多个父rdd中partition数目最大的一个。</li><li>对于parallelize进行初始化操作,分区数在不同部署模式下不同:<ul><li>local[*]:本地cpu的core数量,local[N]则为N,local则为1</li><li>mesos:默认为8</li><li>other:一般为executor个数 * 每个executor的core个数</li></ul></li><li>当然如果定义了<code>spark.default.parallelism</code>参数,其默认分区数也不一定是其值,具体分析见<a href="/2019/12/22/Spark学习系列之二:rdd分区数量分析.html">Spark学习系列之二:rdd分区数量分析</a>。实际api中也能通过传递numPartitions参数覆盖<code>spark.default.parallelism</code>,自行决定并行度。</li><li>比如正在使用的mac是四核,假设向yarn申请executor个数为2,每个executor的core数量为1,那么spark.default.parallelism的值为2,这时一般情况下是不能充分利用其申请核数资源的,最好是申请核数的2~3倍。可以通过 --conf 传入参数 <code>--conf spark.default.parallelism = 4</code> 或者 <code>--conf spark.default.parallelism = 6</code>,使其默认值为申请核数的2~3倍。如果有的task执行比较快,core就空闲出来了,为了多利用core就设置task数量为2~3倍。当然最后的并行度还需要根据实际情况进行分析。</li></ul><blockquote><p>如何确定本机核数?通过local[*]模式进行parallelize初始化rdd,再输出myrdd.partitions.size即可得,也可以通过java代码Runtime.getRuntime.availableProcessors()获得</p></blockquote><p>参考:<br><a href="https://spark.apache.org/docs/latest/configuration.html" target="_blank" rel="noopener">https://spark.apache.org/docs/latest/configuration.html</a><br><a 
href="http://spark.apache.org/docs/latest/tuning.html" target="_blank" rel="noopener">http://spark.apache.org/docs/latest/tuning.html</a></p><h2 id="问题二:standalone模式下,executor个数和executor核数如何确定?"><a href="#问题二:standalone模式下,executor个数和executor核数如何确定?" class="headerlink" title="问题二:standalone模式下,executor个数和executor核数如何确定?"></a>问题二:standalone模式下,executor个数和executor核数如何确定?</h2><p>由于需要通过spark streaming读取kafka,如果对应topic的partition数量已知,那么应该启动对应个数的executor,因为kafka的一个partition同一时间只允许同一个groupid的consumer读取,如果topic的partition为1,申请的executor为2,那么将只有一个executor的资源得到了利用。</p><p>既然executor个数比较重要,yarn模式可以通过<code>--num-executors</code>确定executor个数,那standalone模式如何确定的呢?直接先说结论:</p><ul><li>executor的数量 = total-executor-cores/executor-cores</li><li><code>--total-executor-cores</code> 对应配置 <code>spark.cores.max</code> (default: <code>spark.deploy.defaultCores</code>),表示一个application最大能用的core数目;如果没有设置则默认上限为<code>spark.deploy.defaultCores</code>,该配置的值默认为infinite(不限)</li><li><code>--executor-cores</code> 对应配置 <code>spark.executor.cores</code>,表示每个executor的core数目</li><li>可以看到standalone的executor数量并不能直接指定,而是通过core的换算得到的,如果对executor数目有要求的话,可以额外关注一下。</li></ul><blockquote><p>以下是我写demo过程中遇到的问题,以及解决问题的大致流程。</p></blockquote><p>在写demo过程中通过spark-submit提交任务时,忘了写master,但是通过<code>--executor-cores</code>指定了每个executor的core数量。等应用跑起来后,在spark ui上发现worker上有1个executor,每个executor有4个core,这显然不符合预期。明明通过<code>--executor-cores</code>指定了executor的core数量,为什么申请到的core数目不符合预期?虽然spark-submit的script中没包含master,但是程序中指定了master(spark://zhangqilongdeMacBook-Air.local:7077)。我决定多次调整参数,验证每种情况下申请到的executor数量和每个executor的core数量,总结如下:</p><ul><li>master和executor-cores,只配置一个或者两个都不配,则只申请一个executor,并且executor将尽量使用worker的所有core。</li><li>master和executor-cores两个都配,则申请的executor数量 = worker 
core的总数/executor-cores,每个executor的core数量和executor-cores一致。</li></ul><p>通过源码可以发现:</p><ul><li><code>--executor-cores</code>只有在--master为standalone、yarn、kubernetes模式下才会生效,如果不是这些模式,将会通过<strong>默认配置文件</strong>指定缺失的值。即如果不指定master的情况下(默认为local[*]),<code>--executor-cores</code>并不会生效,并且使用 <code>SPARK_HOME/conf/spark-defaults.conf</code>配置文件中的值对其赋值,如果该配置文件中依然不存在,则为spark系统对该变量的默认值,即infinite(不限)。</li><li><code>--total-executor-cores</code>可以配置standalone模式下每个application可以用的核总数(其实通过spark-submit命令行的提示也能看出来,因为yarn模式下该值不可配所以一开始这个配置被我忽略了)</li></ul><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line">org.apache.spark.deploy.<span class="type">SparkSubmit</span></span><br><span class="line"></span><br><span class="line">...省略部分代码</span><br><span class="line"><span class="comment">//可以看到spark.executor.cores只在某些情况下才会被赋值</span></span><br><span class="line"><span class="type">OptionAssigner</span>(args.executorCores, <span class="type">STANDALONE</span> | <span class="type">YARN</span> | <span class="type">KUBERNETES</span>, <span class="type">ALL_DEPLOY_MODES</span>, confKey = <span class="string">"spark.executor.cores"</span>),</span><br><span class="line"><span class="type">OptionAssigner</span>(args.totalExecutorCores, <span class="type">STANDALONE</span> | <span class="type">MESOS</span> | <span class="type">KUBERNETES</span>, <span class="type">ALL_DEPLOY_MODES</span>, confKey = <span class="string">"spark.cores.max"</span>)</span><br><span 
class="line"></span><br><span class="line">...省略部分代码</span><br><span class="line"> <span class="comment">// Load any properties specified through --conf and the default properties file</span></span><br><span class="line"> <span class="comment">// 通过sparkProperties(已经读取了spark-defaults.conf内容)hashMap对缺失配置进行填充。</span></span><br><span class="line"> <span class="keyword">for</span> ((k, v) <- args.sparkProperties) {</span><br><span class="line"> sparkConf.setIfMissing(k, v)</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line">...省略部分代码</span><br></pre></td></tr></table></figure><p>参考:<br><a href="https://spark.apache.org/docs/latest/spark-standalone.html" target="_blank" rel="noopener">https://spark.apache.org/docs/latest/spark-standalone.html</a></p><a id="more"></a><h2 id="问题三:yarn的container个数和container核数如何确定?"><a href="#问题三:yarn的container个数和container核数如何确定?" class="headerlink" title="问题三:yarn的container个数和container核数如何确定?"></a>问题三:yarn的container个数和container核数如何确定?</h2><p>对于executor数量,相比较standalone,yarn模式下会简单很多。它会在container中运行一个executor,并且可以通过 <code>--num-executors</code> 控制executor的数量。另外由于yarn需要Application Master向集群申请资源等操作,需要额外创建一个container运行Application Master进程。所以yarn的container数量 = num-executors + 1。</p><p>而对于yarn container的vcores数量,发现spark-submit的<code>--executor-cores</code>参数始终没有生效,但是从spark-submit的提示语中该参数是对yarn模式生效的,为什么会没有生效?网上很多文章都没说清楚原因,直到我找到<strong>cloudera</strong>的一篇文章。大致总结一下:</p><ul><li>yarn默认的资源调度器(<code>DefaultResourceCalculator</code>)是只考虑memory的,cpu不在考虑范围内;</li><li>只有将capacity-scheduler.xml中的<code>yarn.scheduler.capacity.resource-calculator</code>配置改为<code>DominantResourceCalculator</code>,yarn在调度的时候才会同时考虑memory和cpu两个维度。</li><li>改了默认的调度器可能带来的问题是,能够运行的container数量会较少,内存利用也会大大降低,集群吞吐量也会随之降低。</li></ul><p>我在本地机器上修改默认调度器前后的对比如下:</p><ul><li>DefaultResourceCalculator默认调度器:</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span 
class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">container面板:</span><br><span class="line">Resource:2048 Memory, 1 VCores</span><br><span class="line"></span><br><span class="line">About the Cluster面板</span><br><span class="line">Scheduler Type | Scheduling Resource Type | Minimum Allocation | Maximum Allocation</span><br><span class="line">Capacity Scheduler | [MEMORY] | <memory:1024, vCores:1> | <memory:8192, vCores:32></span><br></pre></td></tr></table></figure><ul><li>DominantResourceCalculator调度器:</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line">container面板:</span><br><span class="line">Resource:2048 Memory, 2 VCores</span><br><span class="line"></span><br><span class="line">About the Cluster面板</span><br><span class="line">Scheduler Type | Scheduling Resource Type | Minimum Allocation | Maximum Allocation</span><br><span class="line">Capacity Scheduler | [MEMORY, CPU] | <memory:1024, vCores:1> | <memory:8192, vCores:8></span><br></pre></td></tr></table></figure><p>可以看到通过修改默认的调度器实现了vcores的正确分配。</p><blockquote><ul><li>即使当yarn的vcore数目跟<code>--executor-cores</code>对不上时,在spark ui的Environment页面spark.executor.cores依然是和<code>--executor-cores</code>相等的,可以看到在spark层面它依然认为有executor-cores个core,内部应该会初始化对应个数的线程去处理task。 </li><li>后面有时间的话,可以写一篇文章分析一下这两个资源计算器的算法。</li></ul></blockquote><p>参考:<br><a href="https://blog.cloudera.com/managing-cpu-resources-in-your-hadoop-yarn-clusters/" target="_blank" rel="noopener">Managing CPU Resources in your Hadoop YARN Clusters</a><br><a href="http://site.clairvoyantsoft.com/understanding-resource-allocation-configurations-spark-application/" target="_blank" 
rel="noopener">Understanding Resource Allocation configurations for a Spark application</a><br><a href="https://stackoverflow.com/questions/38368985/spark-on-yarn-too-less-vcores-used" target="_blank" rel="noopener">Spark on YARN too less vcores used</a><br><a href="https://stackoverflow.com/questions/25563736/yarn-is-not-honouring-yarn-nodemanager-resource-cpu-vcores" target="_blank" rel="noopener">yarn is not honouring yarn.nodemanager.resource.cpu-vcores</a><br><a href="https://blog.cloudera.com/how-to-tune-your-apache-spark-jobs-part-1/" target="_blank" rel="noopener">How-to: Tune Your Apache Spark Jobs (Part 1)</a><br><a href="https://blog.cloudera.com/how-to-tune-your-apache-spark-jobs-part-2/" target="_blank" rel="noopener">How-to: Tune Your Apache Spark Jobs (Part 2)</a></p><h2 id="问题四:spark-streaming的checkpoint"><a href="#问题四:spark-streaming的checkpoint" class="headerlink" title="问题四:spark streaming的checkpoint"></a>问题四:spark streaming的checkpoint</h2><p>spark streaming的checkpoint数据包含两种,第一种是元数据,包括配置、DStream的操作链、未完成的批次,这些主要是用来重启driver;第二种是rdd,一般对于无状态的rdd其实可以不用checkpoint,当然这样子可能会造成已接收但未处理的数据丢失,而对于<strong>跨批次有状态</strong>的rdd需要记忆之前的状态,同时也为了避免rdd血统过长导致存储空间过大,需要定时进行checkpoint。</p><ul><li>从源码上看,updateStateByKey和reduceByKeyAndWindow (有inverse函数) 的底层实现均为StateDStream</li><li><p>StateDStream的 checkpoint 间隔为BatchInterval(即每个batch的间隔)的整数倍(默认为1倍),并且最小为10s</p><ul><li>即 StateDstream的checkpoint Interval = max(BatchInterval*n, 10), n=1,2,3,4….</li><li>官网原话:For stateful transformations that require RDD checkpointing, the default interval is a multiple of the batch interval that is at least 10 seconds. 
这里说明了checkpoint interval 的最小值为10s,并且必须为BatchInterval的整数倍,<strong>其实还可以加上默认等于BatchInterval</strong>,不然还以为一定要手动调用StateDstream的<code>checkpoint</code>方法,如The default checkpoint interval of stateful dstream is same as batch interval。</li><li>从源码看的话,StateDStream覆盖了DStream的<code>mustCheckpoint</code>,并且指定为true,这也侧面说明StateDStream会默认进行checkpoint,并且不指定checkpoint directory时会报错。</li></ul></li><li><p>除了定时checkpoint外,还需要定时清理保存的数据</p><ul><li>这个周期一般为checkpoint间隔的两倍,Remember Duration = checkpoint_interval * 2</li><li>如果其下游有额外进行checkpoint的话,那么该值应该等于其最近下游的remember duration * 2 和 当前checkpoint interval * 2的最大值</li><li>即Remember Duration = max( children.checkpoint_interval, checkpoint_interval) * 2</li><li><code>DStream.scala</code> 关键源码如下。可以看出,当不主动设置DStream的remember duration时,其大小为checkpoint interval的两倍。同时,如果子stream的remember duration比父stream本身的大,还会递归地为父stream设置remember duration。</li></ul></li></ul><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br></pre></td><td class="code"><pre><span class="line"><span 
class="keyword">private</span>[streaming] <span class="function"><span class="keyword">def</span> <span class="title">initialize</span></span>(time: <span class="type">Time</span>) {</span><br><span class="line"> <span class="keyword">if</span> (zeroTime != <span class="literal">null</span> && zeroTime != time) {</span><br><span class="line"> <span class="keyword">throw</span> <span class="keyword">new</span> <span class="type">SparkException</span>(<span class="string">s"ZeroTime is already initialized to <span class="subst">$zeroTime</span>"</span></span><br><span class="line"> + <span class="string">s", cannot initialize it again to <span class="subst">$time</span>"</span>)</span><br><span class="line"> }</span><br><span class="line"> zeroTime = time</span><br><span class="line"> <span class="comment">// Set the checkpoint interval to be slideDuration or 10 seconds, which ever is larger</span></span><br><span class="line"> <span class="keyword">if</span> (mustCheckpoint && checkpointDuration == <span class="literal">null</span>) {</span><br><span class="line"> checkpointDuration = slideDuration * math.ceil(<span class="type">Seconds</span>(<span class="number">10</span>) / slideDuration).toInt</span><br><span class="line"> logInfo(<span class="string">s"Checkpoint interval automatically set to <span class="subst">$checkpointDuration</span>"</span>)</span><br><span class="line"> }</span><br><span class="line"> <span class="comment">// Set the minimum value of the rememberDuration if not already set</span></span><br><span class="line"> <span class="keyword">var</span> minRememberDuration = slideDuration</span><br><span class="line"> <span class="keyword">if</span> (checkpointDuration != <span class="literal">null</span> && minRememberDuration <= checkpointDuration) {</span><br><span class="line"> <span class="comment">// times 2 just to be sure that the latest checkpoint is not forgotten (#paranoia)</span></span><br><span class="line"> minRememberDuration = 
checkpointDuration * <span class="number">2</span></span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">if</span> (rememberDuration == <span class="literal">null</span> || rememberDuration < minRememberDuration) {</span><br><span class="line"> rememberDuration = minRememberDuration</span><br><span class="line"> }</span><br><span class="line"> <span class="comment">// Initialize the dependencies</span></span><br><span class="line"> dependencies.foreach(_.initialize(zeroTime))</span><br><span class="line">}</span><br><span class="line"></span><br><span class="line"><span class="keyword">private</span>[streaming] <span class="function"><span class="keyword">def</span> <span class="title">remember</span></span>(duration: <span class="type">Duration</span>) {</span><br><span class="line"> <span class="keyword">if</span> (duration != <span class="literal">null</span> && (rememberDuration == <span class="literal">null</span> || duration > rememberDuration)) {</span><br><span class="line"> rememberDuration = duration</span><br><span class="line"> logInfo(<span class="string">s"Duration for remembering RDDs set to <span class="subst">$rememberDuration</span> for <span class="subst">$this</span>"</span>)</span><br><span class="line"> }</span><br><span class="line"> dependencies.foreach(_.remember(parentRememberDuration))</span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li><p>假设BatchInterval=10s,在DAG图中有 A->B->C,A为DirectKafkaInputDStream,B为MappedDStream,C为StateDStream。</p><ul><li>默认情况下,只有StateDStream会进行checkpoint:<ul><li>DirectKafkaInputDStream:checkpoint interval = N/A ,remember duration = 20s</li><li>MappedDStream:checkpoint interval = N/A ,remember duration = 20s</li><li>StateDStream:checkpoint interval = 10s ,remember duration = 20s</li></ul></li><li><p>如果对MappedDStream进行了checkpoint,即 MappedDStream.checkpoint(Seconds(20))</p><ul><li>DirectKafkaInputDStream:checkpoint interval = N/A ,remember duration = 40s 
<ul><li>MappedDStream:checkpoint interval = 20s ,remember duration = 40s</li><li>StateDStream:checkpoint interval = 10s ,remember duration = 20s</li></ul></li></ul></li><li><p>BatchInterval = 5s,如果对MappedDStream进行了checkpoint,即 MappedDStream.checkpoint(Seconds(5))</p><ul><li>DirectKafkaInputDStream:checkpoint interval = N/A ,remember duration = 20s</li><li>MappedDStream:checkpoint interval = 5s ,remember duration = 20s</li><li>StateDStream:checkpoint interval = 10s ,remember duration = 20s</li></ul></li><li><p>如果对DirectKafkaInputDStream进行了checkpoint,即 DirectKafkaInputDStream.checkpoint(Seconds(30))</p><ul><li>DirectKafkaInputDStream:checkpoint interval = 30s,remember duration = 60s </li><li>MappedDStream:checkpoint interval = N/A ,remember duration = 20s</li><li>StateDStream:checkpoint interval = 10s ,remember duration = 20s</li></ul></li><li><p>这也为我们提供了一种调优策略,如果上游dstream设置的checkpoint间隔很短,但是占用内存很大,而下游dstream设置的checkpoint间隔很长,但是占用的内存很小。这个时候可能会以为把上游checkpoint间隔设置得短一点,可以使其remember duration小一点,尽快清理占用的大量内存,但是很可能忽略了spark可能会使用下游的remember duration作为上游的remember duration,从而导致大量内存没有被释放。(当然,对于大内存也不应该频繁地进行checkpoint,这里只是举个例子说明可能出现的问题)</p></li></ul></li></ul><h2 id="问题五:reduceByKeyAndWindow-消费kafka报多线程消费错误"><a href="#问题五:reduceByKeyAndWindow-消费kafka报多线程消费错误" class="headerlink" title="问题五:reduceByKeyAndWindow 消费kafka报多线程消费错误"></a>问题五:reduceByKeyAndWindow 消费kafka报多线程消费错误</h2><p>在使用spark 2.3.0版本 reduceByKeyAndWindow 时,在某些情况下会报多线程消费kafka的错误(”java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access”)。经测试在满足以下三个条件时会出现:</p><ul><li>spark stream context 的 batch interval < window slide Duration </li><li>executor使用的core数目>1 (yarn模式下,需要注意vcore的数目)</li><li>kafka topic 对应的 partition 个数为1</li></ul><p>在网上找了挺多资料,挺多人遇到同样的问题,也看了部分reduceByKeyAndWindow的源码,最后发现是spark实现的一个bug,只要升到2.4.0版本就不会有这个问题。</p><blockquote><p>这个问题其实花了挺长时间去找问题的原因,也试过先cache或checkpoint,但是依然无法解决这个问题。源码实现方面,reduceByKeyAndWindow的底层流实现为ReducedWindowedDStream,里面分析了previous window、current window、new rdd、old 
rdd等等,对old rdd运行invReduceFunc,对new rdd运行reduceFunc。<br>最终有人重写了kafka consumer解决了此问题,详见github的pr <a href="https://github.com/apache/spark/pull/20997" target="_blank" rel="noopener">Avoid concurrent use of cached consumers in CachedKafkaConsumer</a>,核心是避免同时使用同一个consumer读取TopicPartition。</p></blockquote><p>参考:<br><a href="https://issues.apache.org/jira/browse/SPARK-23636" target="_blank" rel="noopener">2.4.0修复bug</a><br><a href="https://blog.csdn.net/xianpanjia4616/article/details/82811414" target="_blank" rel="noopener">KafkaConsumer is not safe for multi-threaded access</a><br><a href="https://issues.apache.org/jira/browse/SPARK-19185" target="_blank" rel="noopener">https://issues.apache.org/jira/browse/SPARK-19185</a><br><a href="https://blog.csdn.net/xianpanjia4616/article/details/86703595" target="_blank" rel="noopener">spark各种报错汇总以及解决方法</a></p>]]></content>
<tags>
<tag> spark </tag>
<tag> scala </tag>
</tags>
</entry>
<entry>
<title>golang数据库连接broken pipe异常原因分析及解决</title>
<link href="/2019/11/10/golang%E6%95%B0%E6%8D%AE%E5%BA%93%E8%BF%9E%E6%8E%A5broken-pipe%E5%BC%82%E5%B8%B8%E5%8E%9F%E5%9B%A0%E5%88%86%E6%9E%90%E5%8F%8A%E8%A7%A3%E5%86%B3.html"/>
<url>/2019/11/10/golang%E6%95%B0%E6%8D%AE%E5%BA%93%E8%BF%9E%E6%8E%A5broken-pipe%E5%BC%82%E5%B8%B8%E5%8E%9F%E5%9B%A0%E5%88%86%E6%9E%90%E5%8F%8A%E8%A7%A3%E5%86%B3.html</url>
<content type="html"><![CDATA[<blockquote><p>在golang开发中,使用mysql数据库时一般使用的数据库驱动包为 go-sql-driver/mysql,该包是按照go官方包database/sql定义规范实现的。我们线上的程序偶尔会在标准错误输出 “broken pipe”,为了究其原因做了些调研,并给出了解决方法。</p></blockquote><h2 id="线上场景"><a href="#线上场景" class="headerlink" title="线上场景"></a>线上场景</h2><p>目前 <strong>项目A</strong> 使用的go-sql-driver/mysql版本为 <strong>3654d25ec346ee8ce71a68431025458d52a38ac0</strong> , <strong>项目B</strong> 使用的版本为 <strong>v1.3.0</strong> ,其中 <strong>项目A</strong> 的版本低于v1.3.0。它们线上标准错误输出都有类似以下的日志,但是程序的业务逻辑却没有受到影响。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">[mysql] 2019/08/01 17:12:18 packets.go:33: unexpected EOF</span><br><span class="line">[mysql] 2019/08/01 17:12:18 packets.go:130: write tcp 127.0.0.1:59722->127.0.0.1:3306: write: broken pipe</span><br></pre></td></tr></table></figure><p>通过日志输出以及堆栈可以找到 <code>go-sql-driver/mysql/packets.go</code> 对应的源码,可以发现第一条日志是以下第8行代码打印的,第二条是第9行调用<code>mc.Close()</code>关闭连接时报错。</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">// Read packet to buffer 'data'</span></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(mc *mysqlConn)</span> <span class="title">readPacket</span><span class="params">()</span> <span class="params">([]<span class="keyword">byte</span>, error)</span></span> {</span><br><span class="line"> <span class="keyword">var</span> prevData 
[]<span class="keyword">byte</span></span><br><span class="line"> <span class="keyword">for</span> {</span><br><span class="line"> <span class="comment">// read packet header</span></span><br><span class="line"> data, err := mc.buf.readNext(<span class="number">4</span>)</span><br><span class="line"> <span class="keyword">if</span> err != <span class="literal">nil</span> {</span><br><span class="line"> errLog.Print(err)</span><br><span class="line"> mc.Close()</span><br><span class="line"> <span class="keyword">return</span> <span class="literal">nil</span>, driver.ErrBadConn</span><br><span class="line"> }</span><br><span class="line"> <span class="comment">// 省略部分代码</span></span><br><span class="line">}</span><br></pre></td></tr></table></figure><h2 id="问题复现"><a href="#问题复现" class="headerlink" title="问题复现"></a>问题复现</h2><blockquote><p>通过网上搜索能够大概猜出是mysql server主动关闭连接的原因,我们可以通过设置mysql server主动关闭连接来复现线上场景,并且通过tcpdump观察其原因。</p></blockquote><h3 id="1-设置mysql-server主动关闭连接时间"><a href="#1-设置mysql-server主动关闭连接时间" class="headerlink" title="1.设置mysql server主动关闭连接时间"></a>1.设置mysql server主动关闭连接时间</h3><p>mysql server默认设置的关闭不活跃连接时间为28800秒(8小时),我们通过 <code>set global wait_timeout=10</code> 设置为10秒,便于问题重现。</p><h3 id="2-运行tcpdump和测试demo"><a href="#2-运行tcpdump和测试demo" class="headerlink" title="2.运行tcpdump和测试demo"></a>2.运行tcpdump和测试demo</h3><p>1.通过tcpdump可以收集tcp数据包的发送接收情况,尤其是在mysql server关闭连接后,go程序如何和mysql server交互是我们关注的重点,tcpdump命令如下:</p><p><code>sudo tcpdump -s 0 -t -i lo -l port 3306 -w lo.cap</code></p><p>2.运行一个简单的测试demo</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span 
class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line"><span class="keyword">import</span> (</span><br><span class="line"> <span class="string">"database/sql"</span></span><br><span class="line"> <span class="string">"log"</span></span><br><span class="line"> <span class="string">"time"</span></span><br><span class="line"> _ <span class="string">"github.com/go-sql-driver/mysql"</span></span><br><span class="line">)</span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> {</span><br><span class="line"> <span class="comment">// before you run this test program, please run the script in your mysql</span></span><br><span class="line"> <span class="comment">// "set global wait_timeout=10;"</span></span><br><span class="line"> <span class="comment">// 表示mysql server关闭不活跃连接的等待时间</span></span><br><span class="line"> <span class="comment">// 参考 https://github.com/go-sql-driver/mysql/issues/657</span></span><br><span class="line"> db, err := sql.Open(<span class="string">"mysql"</span>, <span 
class="string">"root:zhang@tcp(127.0.0.1:3306)/?charset=latin1&autocommit=1&parseTime=true&loc=Local&timeout=3s"</span>)</span><br><span class="line"> <span class="keyword">if</span> err != <span class="literal">nil</span> {</span><br><span class="line"> log.Fatal(err)</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">defer</span> db.Close()</span><br><span class="line"> <span class="comment">//db.SetConnMaxLifetime(5 * time.Second)</span></span><br><span class="line"> err = db.Ping()</span><br><span class="line"> <span class="keyword">if</span> err != <span class="literal">nil</span> {</span><br><span class="line"> log.Fatal(err)</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">go</span> <span class="function"><span class="keyword">func</span><span class="params">()</span></span> {</span><br><span class="line"> <span class="keyword">for</span> {</span><br><span class="line"> _, err := db.Exec(<span class="string">"select * from test_time.A"</span>)</span><br><span class="line"> <span class="keyword">if</span> err != <span class="literal">nil</span> {</span><br><span class="line"> log.Fatal(err)</span><br><span class="line"> }</span><br><span class="line"> <span class="comment">// Wait for 11 seconds. 
This should be enough to timeout the conn, since `wait_timeout` is 10s</span></span><br><span class="line"> time.Sleep(<span class="number">11</span> * time.Second)</span><br><span class="line"> }</span><br><span class="line"> }()</span><br><span class="line"> time.Sleep(<span class="number">1000</span> * time.Second)</span><br><span class="line">}</span><br></pre></td></tr></table></figure><a id="more"></a><h3 id="3-分析tcp数据包"><a href="#3-分析tcp数据包" class="headerlink" title="3.分析tcp数据包"></a>3.分析tcp数据包</h3><p>通过wireshark打开lo.cap文件可以更加直观观察其交互情况,截图如下二图:<br><img src="/2019/11/10/golang数据库连接broken-pipe异常原因分析及解决/tcpdump1.png" alt="tcpdump1"><br><img src="/2019/11/10/golang数据库连接broken-pipe异常原因分析及解决/tcpdump2.png" alt="tcpdump2"></p><p>可以看到10秒的第222号数据包中,mysql server发送的FIN信号并且收到了golang程序第223号的ack后,进入到tcp连接中FIN_WAIT_2状态,golang程序则进入到CLOSE_WAIT状态,此时mysql server不再接受任何查询请求。同时由于golang程序应用层无法感知mysql server关闭了连接,在11秒第224号的数据包中依然向mysql server发送了查询请求,mysql server应用层发现错误,直接返回重置连接。应用程序也打印出对应的日志。</p><h3 id="小结"><a href="#小结" class="headerlink" title="小结"></a>小结</h3><p>通过复现和分析,可知根本原因是golang尝试去使用一个被mysql server主动关闭的连接。通过代码堆栈分析,还分析出<code>unexpected EOF</code>是发送查询给mysql server后读取返回结果报错,而<code>write tcp 127.0.0.1:59722->127.0.0.1:3306: write: broken pipe</code>则是读取结果报错后尝试关闭连接时失败的报错。</p><h2 id="mysql-server连接和golang数据库连接池的复用时间"><a href="#mysql-server连接和golang数据库连接池的复用时间" class="headerlink" title="mysql server连接和golang数据库连接池的复用时间"></a>mysql server连接和golang数据库连接池的复用时间</h2><blockquote><p>通过上一节,基本能确定是mysql server主动关闭连接的原因导致的,那么mysql server的 <code>wait_timeout</code>具体的定义和golang怎么解决这种问题的?对我们的业务有无影响?</p></blockquote><h3 id="1-mysql连接复用时间"><a href="#1-mysql连接复用时间" class="headerlink" title="1.mysql连接复用时间"></a>1.mysql连接复用时间</h3><p>mysql server在一定时间后将自动关闭不活跃连接。这个时间由<code>wait_timeout</code>决定,表示mysql server关闭 <strong>不活跃</strong> 连接的等待时间。<code>wait_timeout</code>配置官方说明如下。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span 
class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">wait_timeout: </span><br><span class="line">The number of seconds the server waits for </span><br><span class="line">activity on a noninteractive connection before closing it.</span><br><span class="line"></span><br><span class="line">https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_wait_timeout</span><br></pre></td></tr></table></figure><h3 id="2-golang数据库连接池复用时间"><a href="#2-golang数据库连接池复用时间" class="headerlink" title="2.golang数据库连接池复用时间"></a>2.golang数据库连接池复用时间</h3><p>go的默认连接池不会自动关闭连接,除非通过 <code>DB.SetConnMaxLifetime()</code> 设置了连接最长的时间,一般建议该配置远小于mysql server的<code>wait_timeout</code>。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">// SetConnMaxLifetime sets the maximum amount </span><br><span class="line">// of time a connection may be reused.</span><br><span class="line">//</span><br><span class="line">// Expired connections may be closed lazily before reuse.</span><br><span class="line">//</span><br><span class="line">// If d <= 0, connections are reused forever.</span><br><span class="line">func (db *DB) SetConnMaxLifetime(d time.Duration)</span><br></pre></td></tr></table></figure><p>如果某个连接已经被mysql server关闭,而go程序无法感知,在复用该数据库连接时则会输出上述的错误日志。目前 <strong>项目A</strong> 和 <strong>项目B</strong> 的mysql驱动版本将这种错误情况返回 <code>driver.ErrBadConn</code>。</p><h3 id="3-对业务的影响"><a href="#3-对业务的影响" class="headerlink" title="3.对业务的影响"></a>3.对业务的影响</h3><blockquote><p>这种mysql server主动关闭连接的情况,对我们的业务有没影响?</p></blockquote><p>go的官方 <code>database/sql</code> 包会对返回 <code>driver.ErrBadConn</code> 
的错误进行重试,这点可以通过看源码 <code>database/sql/sql.go</code> 验证。可以看到只要驱动包返回了<code>drvier.ErrBadConn</code>,那么就会进行重试2次。因此如果第一次执行失败了,那么还会进行重试,所以最终对业务不会有影响,只是标准错误输出有对应的日志输出。</p><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">// maxBadConnRetries is the number of maximum retries if the driver returns</span></span><br><span class="line"><span class="comment">// driver.ErrBadConn to signal a broken connection before forcing a new</span></span><br><span class="line"><span class="comment">// connection to be opened.</span></span><br><span class="line"><span class="keyword">const</span> maxBadConnRetries = <span class="number">2</span></span><br><span class="line"></span><br><span class="line"><span class="comment">// ExecContext executes a query without returning any rows.</span></span><br><span class="line"><span class="comment">// The args are for any placeholder parameters in the query.</span></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(db *DB)</span> <span class="title">ExecContext</span><span class="params">(ctx context.Context, query <span class="keyword">string</span>, args ...<span class="keyword">interface</span>{})</span> <span class="params">(Result, error)</span></span> 
{</span><br><span class="line"> <span class="keyword">var</span> res Result</span><br><span class="line"> <span class="keyword">var</span> err error</span><br><span class="line"> <span class="keyword">for</span> i := <span class="number">0</span>; i < maxBadConnRetries; i++ {</span><br><span class="line"> res, err = db.exec(ctx, query, args, cachedOrNewConn)</span><br><span class="line"> <span class="keyword">if</span> err != driver.ErrBadConn {</span><br><span class="line"> <span class="keyword">break</span></span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">if</span> err == driver.ErrBadConn {</span><br><span class="line"> <span class="keyword">return</span> db.exec(ctx, query, args, alwaysNewConn)</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">return</span> res, err</span><br><span class="line">}</span><br></pre></td></tr></table></figure><h2 id="解决方案"><a href="#解决方案" class="headerlink" title="解决方案"></a>解决方案</h2><blockquote><p>测试demo见问题复现章节</p></blockquote><h3 id="解决方案一:更新mysql驱动到目前最新release版本-v1-4-1"><a href="#解决方案一:更新mysql驱动到目前最新release版本-v1-4-1" class="headerlink" title="解决方案一:更新mysql驱动到目前最新release版本-v1.4.1"></a>解决方案一:更新mysql驱动到目前最新release版本-v1.4.1</h3><p>目前最新release的版本v1.4.1包含了修改以前 <strong>激进重试策略</strong> 的提交(代码修改逻辑见 <a href="https://github.com/go-sql-driver/mysql/commit/26471af196a17ee75a22e6481b5a5897fb16b081" target="_blank" rel="noopener">commit</a>),将 <strong>许多</strong> 情况下从返回 <code>driver.ErrBadConn</code> 改为返回 <code>ErrInvalidConn</code>,减少滥用官方sql包的重试逻辑。本章讨论的mysql server主动关闭连接也在此次修改中。</p><p>跟踪源代码调用栈发现,由于连接为非阻塞socket,在mysql server关闭连接后,<code>go-sql-driver/mysql</code>还能继续write数据到socket的buffer中,并且不会立即返回错误。在<code>go-sql-driver/mysql</code>读取返回值时才从系统内核socket读取mysql server返回的错误,此时<code>go-sql-driver/mysql</code>知道mysql 
server返回错误了。这个时候有两个策略:</p><ul><li>一是返回driver.ErrBadConn,在官方<code>database/sql</code>包进行重试,release版本-v1.3.0使用这种策略。</li><li>二是返回db error,因为从mysql应用层面来说,它认为已经发送sql成功,只是读取的时候返回错误了,这个时候它不需要重试逻辑,<strong>避免sql被重复执行</strong>。目前release版本-v1.4.0和v1.4.1就是这种策略。</li></ul><p>使用release版本-v1.4.1,经测试mysql server关闭连接后,再执行sql会输出以下日志,其中第一行为<code>go-sql-driver/mysql</code>包的标准错误输出,第二行和第三行是程序逻辑的日志输出,执行sql时返回db error,官方sql包没有进行重试。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">[mysql] 2019/08/01 17:09:41 packets.go:36: unexpected EOF</span><br><span class="line">2019/08/01 17:09:41 invalid connection</span><br><span class="line">exit status 1</span><br></pre></td></tr></table></figure><p>小结:</p><ul><li><p>优点:</p><ul><li>避免了使用激进的重试策略,符合golang定义的规范。</li><li>可以通过SetConnMaxLifetime主动设置DB使用每条连接的时间,只要SetConnMaxLifetime设置的时间比<code>wait_timeout</code>小,<code>go-sql-driver/mysql</code>就能主动关闭连接。</li></ul></li><li><p>缺点:</p><ul><li>在不设置SetConnMaxLifetime时,在mysql server关闭连接后再使用该连接就会返回db error,对目前代码冲击比较大。</li></ul></li></ul><h3 id="解决方案二:更新mysql驱动到最新的master"><a href="#解决方案二:更新mysql驱动到最新的master" class="headerlink" title="解决方案二:更新mysql驱动到最新的master"></a>解决方案二:更新mysql驱动到最新的master</h3><blockquote><p>目前master的最新提交为 877a9775f06853f611fb2d4e817d92479242d1cd,本节的讨论基于该版本</p></blockquote><p>由于mysql驱动release的版本v1.4.0和v1.4.1废除了原来激进的重试策略,不活跃连接被关闭后仍然会被golang使用,并且不再重试导致直接返回db error。故本小节探讨两个点:</p><ul><li>能否在使用连接前确认连接是否被关闭。如果已经被关闭或者有异常数据,则返回<code>driver.ErrBadConn</code>便于<code>database/sql</code>进行重试;如果没有关闭,则复用该连接。</li><li>通过某种机制避免重复执行sql。</li></ul><p>为此vicent提出了<a href="https://github.com/go-sql-driver/mysql/pull/934" target="_blank" rel="noopener">mr</a> ,目前已经被merge到<a href="https://github.com/go-sql-driver/mysql/commit/bc5e6eaa6d4fc8a7c8668050c7b1813afeafcabe" target="_blank" 
rel="noopener">master</a>中,但是未发布release版本,目前还在不断地优化中。简单描述一下vicent的修改要点:</p><ul><li>从pool刚拿出的连接均为不活跃的连接。</li><li>刚从pool拿出的连接,如果直接从socket读取一个字节的内容,那么一定不会从mysql服务端收到信息,因为该连接原先是不活跃连接,与mysql server没有实际的数据交换,仅仅是保持连接。如果能读到数据则表示连接有异常,返回<code>driver.ErrBadConn</code>便于<code>database/sql</code>进行重试。</li><li><p>由于Go的runtime使用的是非阻塞(O_NONBLOCK)的socket,可以在向mysql server发送数据包前,先调用read()方法做探测:</p><ul><li>1.当read返回 n=0&&err==nil,表示对端的socket已经关闭,这种情况下返回driver.ErrBadConn,便于go的sql包进行重试。</li><li>2.当read返回 n>0,表示对端的socket有异常,依然能够读取到数据,也意味着连接异常,返回driver.ErrBadConn,便于go的sql包进行重试。</li><li>3.当read返回 err=EAGAIN(linux) 或者 EWOULDBLOCK(windows),表示对端的socket未关闭,这种情况下可以复用该连接。由于是非阻塞的read,当对端没有数据可读,程序不会阻塞起来等待数据就绪,而是返回EAGAIN和EWOULDBLOCK提示目前没有数据可读,请稍后再试。</li></ul></li><li><p>刚从pool拿出的连接,如果探测结果为上面第1和第2种情况,则返回<code>driver.ErrBadConn</code>给上层go官方包便于进行重试;如果探测结果为 EAGAIN 或者 EWOULDBLOCK则可以继续使用此连接。</p></li><li>没有放进pool的连接不需要进行探测,直接复用。</li></ul><p>使用最新的master,经测试mysql server关闭连接后,再执行sql会输出以下日志,其为mysql驱动输出,并且程序不报错。在mysql client执行 <code>show full processlist</code> 能够看到两条连接的建立,即第一条连接被server关闭后,由于探测发现连接被关闭,所以不会使用原先的连接而是重新建立连接。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">[mysql] 2019/08/01 19:45:15 packets.go:122: closing bad idle connection: EOF</span><br></pre></td></tr></table></figure><p>小结:</p><ul><li>优点:<ul><li>避免了使用激进的重试策略,符合golang定义的规范。</li><li>通过探测机制避免了在mysql server关闭连接后再使用该连接就会返回db error的影响。</li><li>可以不用设置SetConnMaxLifetime。</li></ul></li><li>缺点:<ul><li>目前master版本可能仍在对探测机制进行优化,可以等下一个release发布再更新。</li></ul></li></ul><h3 id="解决方案三:不升级mysql驱动版本"><a href="#解决方案三:不升级mysql驱动版本" class="headerlink" title="解决方案三:不升级mysql驱动版本"></a>解决方案三:不升级mysql驱动版本</h3><p>由于目前的重试逻辑存在,我们可以不升级mysql驱动版本。虽然目前日志有错误输出,如果确认是mysql server <strong>主动关闭连接</strong> 导致的可以忽略这种错误,毕竟<code>database/sql</code>会进行重试。也可以通过SetConnMaxLifetime设置连接复用时间,到期<code>go-sql-driver/mysql</code>可以自动关闭连接。</p><p>小结:</p><ul><li>优点:<ul><li>在mysql
server关闭连接后再使用该连接就不会返回db error,<code>database/sql</code>能够进行重试。</li><li>可以通过SetConnMaxLifetime主动设置DB使用每条连接的时间,只要SetConnMaxLifetime设置的时间比<code>wait_timeout</code>小,<code>go-sql-driver/mysql</code>就能主动关闭连接。</li></ul></li><li>缺点:<ul><li>使用激进的重试策略,不符合golang定义的规范,在极端情况下sql仍然可能会被执行多次。</li></ul></li></ul><h2 id="总结"><a href="#总结" class="headerlink" title="总结"></a>总结</h2><p>本文分析线上报错的原因以及线下重现问题,通过研究golang源码解释了重试逻辑以及重试逻辑变更原因,解释了vicent探测socket改进的基本思路,最后给出了对应的解决方案:</p><p>1.如果要更新到v1.4.1版本,一定要通过SetConnMaxLifetime设置DB最长使用连接时间,并且要比mysql的<code>wait_timeout</code>小。</p><p>2.如果要使用具有探测socket功能版本,等下一个release版本,可以不用设置SetConnMaxLifetime。</p><p>3.如果暂时不需要更新,能够接受重复执行sql的风险,也最好通过SetConnMaxLifetime设置DB最长使用连接时间,并且要比mysql的<code>wait_timeout</code>小。</p><h2 id="参考链接"><a href="#参考链接" class="headerlink" title="参考链接"></a>参考链接</h2><ul><li>Server timeouts broken since #302 <a href="https://github.com/go-sql-driver/mysql/issues/657" target="_blank" rel="noopener">https://github.com/go-sql-driver/mysql/issues/657</a></li><li>packets: Check connection liveness before writing query <a href="https://github.com/go-sql-driver/mysql/pull/934" target="_blank" rel="noopener">https://github.com/go-sql-driver/mysql/pull/934</a></li><li>Go SQL client attempts write against broken connections <a href="https://github.com/go-sql-driver/mysql/issues/529" target="_blank" rel="noopener">https://github.com/go-sql-driver/mysql/issues/529</a></li><li>Check connection liveness before sending query <a href="https://github.com/go-sql-driver/mysql/issues/882" target="_blank" rel="noopener">https://github.com/go-sql-driver/mysql/issues/882</a></li><li>Golang网络库中socket阻塞调度源码剖析 <a href="https://studygolang.com/articles/4977" target="_blank" rel="noopener">https://studygolang.com/articles/4977</a></li><li>Linux中的EAGAIN含义 <a href="https://www.cnblogs.com/pigerhan/archive/2013/02/27/2935403.html" target="_blank"
rel="noopener">https://www.cnblogs.com/pigerhan/archive/2013/02/27/2935403.html</a></li></ul>]]></content>
<tags>
<tag> go </tag>
<tag> mysql </tag>
</tags>
</entry>
<entry>
<title>浅谈golang对mysql时间类型数据转换的问题</title>
<link href="/2019/11/09/%E6%B5%85%E8%B0%88golang%E5%AF%B9mysql%E6%97%B6%E9%97%B4%E7%B1%BB%E5%9E%8B%E6%95%B0%E6%8D%AE%E8%BD%AC%E6%8D%A2%E7%9A%84%E9%97%AE%E9%A2%98.html"/>
<url>/2019/11/09/%E6%B5%85%E8%B0%88golang%E5%AF%B9mysql%E6%97%B6%E9%97%B4%E7%B1%BB%E5%9E%8B%E6%95%B0%E6%8D%AE%E8%BD%AC%E6%8D%A2%E7%9A%84%E9%97%AE%E9%A2%98.html</url>
<content type="html"><![CDATA[<p>部门某些业务需要在海外上线,涉及到数据库时区、应用时区的转换。本文将讨论golang针对数据库时区的处理问题。</p><blockquote><p>为了方便讨论,避免混淆,本文对“时间”的表达方式作出约定:时间=时区时间+时区。如时间 2019-05-21 15:48:38 CST ,则其时区时间为2019-05-21 15:48:38,时区为CST。如果没有特别说明,本文提到的“时间”都包含时区。</p></blockquote><h2 id="一、golang中mysql数据库驱动的时区配置"><a href="#一、golang中mysql数据库驱动的时区配置" class="headerlink" title="一、golang中mysql数据库驱动的时区配置"></a>一、golang中mysql数据库驱动的时区配置</h2><p>mysql中关于时间日期的概念数据模型有<code>DATE</code>、<code>DATETIME</code>、<code>TIMESTAMP</code>,golang程序根据数据链接DSN(Data Source Name)配置,数据库驱动 github.com/go-sql-driver/mysql 可以对这三种类型的值转换成go中的time.Time类型,关键配置如下:</p><ul><li>parseTime<ul><li>默认为false,把mysql中的 <code>DATE</code>、<code>DATETIME</code>、<code>TIMESTAMP</code> 转为golang中的[]byte类型</li><li>设置为true,将会转为golang中的 <code>time.Time</code> 类型</li></ul></li><li>loc<ul><li>默认为UTC,表示转换<code>DATE</code>、<code>DATETIME</code>、<code>TIMESTAMP</code> 为 <code>time.Time</code> 时所使用的时区</li><li>设置成Local,则与系统设置的时区一致</li><li>如果想要设置成中国时区可以设置成 <code>Asia/Shanghai</code> ,更多的时区可以参考 <code>/usr/share/zoneinfo/</code> 或者<code>$GOROOT/lib/time/zoneinfo.zip</code>。</li></ul></li></ul><p>在实际的使用中,我们往往会配置成 <code>parseTime=true</code> 和 <code>loc=Local</code>,这样避免了手动转换<code>DATE</code>、<code>DATETIME</code>、<code>TIMESTAMP</code>。</p><h2 id="二、golang如何转换mysql的时间类型"><a href="#二、golang如何转换mysql的时间类型" class="headerlink" title="二、golang如何转换mysql的时间类型"></a>二、golang如何转换mysql的时间类型</h2><blockquote><p>在涉及到不同时区时,我们golang程序应该怎么处理mysql的 DATE、DATETIME、TIMESTAMP 数据类型?是否只要配置了parseTime=true&loc=xxx就不会有问题?我们来做两个小实验。</p></blockquote><h3 id="实验一:应用和数据库在同一时区"><a href="#实验一:应用和数据库在同一时区" class="headerlink" title="实验一:应用和数据库在同一时区"></a>实验一:应用和数据库在同一时区</h3><h4 id="1-timestamp"><a href="#1-timestamp" class="headerlink" title="1.timestamp"></a>1.timestamp</h4><p>a.系统时区设置为CST,mysql和golang在同一个时区的机器上。(如何设置和查看时区可以参考本文第五节内容。)</p><ul><li>golang在程序中连接数据库使用的配置DSN是parseTime=true&loc=xxx,xxx分别为UTC、Asia/Shanghai、Europe/London、Local。</li><li>mysql终端中insert一条timestamp【时区时间】为2019-04-02 
13:18:17的记录,其UNIX_TIMESTAMP(timestamp)=1554182297。</li></ul><p>以下1~5行均为golang程序读取刚插入数据库的数据结果,第一列输出分别为链接数据库DSN配置,第二列为转换为time.Time后的输出。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">parseTime=true&loc=UTC: 2019-04-02 13:18:17 +0000 UTC</span><br><span class="line">parseTime=true&loc=Asia/Shanghai: 2019-04-02 13:18:17 +0800 CST</span><br><span class="line">parseTime=true&loc=Europe/London: 2019-04-02 13:18:17 +0100 BST</span><br><span class="line">parseTime=true&loc=Local: 2019-04-02 13:18:17 +0800 CST</span><br></pre></td></tr></table></figure><p>b.同样的机器,修改系统时区为BST,在mysql终端中select上一步插入的数据,timestamp【时区时间】为2019-04-02 06:18:17,UNIX_TIMESTAMP(timestamp)=1554182297。程序输出为:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">parseTime=true&loc=UTC: 2019-04-02 06:18:17 +0000 UTC</span><br><span class="line">parseTime=true&loc=Asia/Shanghai: 2019-04-02 06:18:17 +0800 CST</span><br><span class="line">parseTime=true&loc=Europe/London: 2019-04-02 06:18:17 +0100 BST</span><br><span class="line">parseTime=true&loc=Local: 2019-04-02 06:18:17 +0100 BST</span><br></pre></td></tr></table></figure><p>c.小结:</p><ul><li>UNIX_TIMESTAMP可以把mysql的timstamp转为距离 1970-01-01 00:00:00 UTC 的秒数,这个经过转换后的值无论mysql在任何时区都不会变。</li><li>即使同一条数据库记录,由于时区不同,mysql终端中直接select出的timestamp的【时区时间】也不同。也侧面说明了mysql内部实现的timstamp结构体中包含了时区信息,在输出时根据当前时区做转换,输出当前【时区时间】。</li><li>golang程序获取到的time.Time等于:mysql【时区时间】+ 时区,时区为loc指定的时区,与mysql时区没有关系。</li></ul><a id="more"></a><h4 id="2-date"><a href="#2-date" class="headerlink" 
title="2.date"></a>2.date</h4><p>a.系统时区设置为CST,mysql和golang在同一个时区的机器上。</p><ul><li>golang在程序中连接数据库使用的配置DSN是parseTime=true&loc=xxx,xxx分别为UTC、Asia/Shanghai、Europe/London、Local。</li><li>mysql中insert一条【时区时间】为date=2019-04-02。</li></ul><p>程序输出为:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">parseTime=true&loc=UTC: 2019-04-02 00:00:00 +0000 UTC</span><br><span class="line">parseTime=true&loc=Asia/Shanghai: 2019-04-02 00:00:00 +0800 CST</span><br><span class="line">parseTime=true&loc=Europe/London: 2019-04-02 00:00:00 +0100 BST</span><br><span class="line">parseTime=true&loc=Local: 2019-04-02 00:00:00 +0800 CST</span><br></pre></td></tr></table></figure><p>b.同样的机器,修改系统时区为BST,在mysql终端中select上一步插入的数据,date【时区时间】为2019-04-02。程序输出为:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">parseTime=true&loc=UTC: 2019-04-02 00:00:00 +0000 UTC</span><br><span class="line">parseTime=true&loc=Asia/Shanghai: 2019-04-02 00:00:00 +0800 CST</span><br><span class="line">parseTime=true&loc=Europe/London: 2019-04-02 00:00:00 +0100 BST</span><br><span class="line">parseTime=true&loc=Local: 2019-04-02 00:00:00 +0100 BST</span><br></pre></td></tr></table></figure><p>c.小结</p><ul><li>同一条数据库记录,不管时区golang一不一样,mysql终端中select出的date始终一样。</li><li>golang程序获取到的time.Time等于:mysql时区时间 + 时区,时区为loc指定的时区,与mysql时区没有关系。</li></ul><h4 id="3-datetime"><a href="#3-datetime" class="headerlink" title="3.datetime"></a>3.datetime</h4><p>a.系统时区设置为CST,mysql和golang在同一个时区的机器上。</p><ul><li>golang在程序中连接数据库使用的配置DSN是parseTime=true&loc=xxx,xxx分别为UTC、Asia/Shanghai、Europe/London、Local。</li><li>mysql中insert一条【时区时间】为datetime=2019-04-02 
13:03:01。</li></ul><p>程序输出为:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">parseTime=true&loc=UTC: 2019-04-02 13:03:01 +0000 UTC</span><br><span class="line">parseTime=true&loc=Asia/Shanghai: 2019-04-02 13:03:01 +0800 CST</span><br><span class="line">parseTime=true&loc=Europe/London: 2019-04-02 13:03:01 +0100 BST</span><br><span class="line">parseTime=true&loc=Local: 2019-04-02 13:03:01 +0800 CST</span><br></pre></td></tr></table></figure><p>b.同样的机器,修改系统时区为BST,在mysql终端中select上一步插入的数据,datetime【时区时间】为2019-04-02 13:03:01。程序输出为:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">parseTime=true&loc=UTC: 2019-04-02 13:03:01 +0000 UTC</span><br><span class="line">parseTime=true&loc=Asia/Shanghai: 2019-04-02 13:03:01 +0800 CST</span><br><span class="line">parseTime=true&loc=Europe/London: 2019-04-02 13:03:01 +0100 BST</span><br><span class="line">parseTime=true&loc=Local: 2019-04-02 13:03:01 +0100 BST</span><br></pre></td></tr></table></figure><p>c.小结</p><ul><li>同一条数据库记录,不管时区一不一样,mysql终端中select出的datetime始终一样。</li><li>golang程序获取到的time.Time等于:mysql时区时间 + 时区,时区为loc指定的时区,与mysql时区没有关系。</li></ul><h3 id="实验二:应用和数据库不在同一时区"><a href="#实验二:应用和数据库不在同一时区" class="headerlink" title="实验二:应用和数据库不在同一时区"></a>实验二:应用和数据库不在同一时区</h3><p>我们的国内应用需要访问海外数据库数据,假设国内机器操作系统设置为北京时间,golang程序在国内并且loc设置为Local,海外机器操作系统设置为UTC时间,海外数据库时区设置为跟随操作系统时间。</p><p>1.如果在海外mysql终端直接insert date、datetime、timestamp,在国内golang程序获取到的time.Time为 mysql【时区时间】+ CST时区,与实验一一致。</p><p>2.如果在国内golang程序中insert date、datetime、timestamp,在海外mysql客户端读取的结果为 国内【时区时间】。</p><p>3.如果在国内golang程序中insert timestamp 是通过列字段 自动更新或者通过 CURRENT_TIMESTAMP() 
插入,在海外mysql客户端读取的结果为 mysql【时区时间】。</p><p>4.小结</p><ul><li>date和datetime类型不包含时区信息, <strong>mysql不会对其进行转换,存取时在mysql中相当于一个字符串</strong> 。</li><li>timestamp包含时区信息,使用时需要特别注意:<ul><li>在golang中如果插入/更新timestamp时,显式指定其时区时间,插入数据库,再取出来拼接上原来时区信息,这样存的和取的time.Time是一样的,前后不变。此时,在存取timestamp过程中也相当于一个字符串。</li><li>如果不显式指定timestamp的时区时间,而是通过 <code>CURRENT_TIMESTAMP</code> 自动更新或者通过 <code>CURRENT_TIMESTAMP()</code> 插入,那么mysql存进去的timestamp为 mysql的时区时间,取出来映射到time.Time为 mysql的时区时间+golang时区。这里有一个潜在的问题是,假设数据有A和B两个字段,它们分别是datetime类型和自动更新CURRENT_TIMESTAMP的timestamp类型,time.Now()对应数据库字段A,数据B字段不设置值,insert到数据库。在下次select出来的时候,两个字段会相差时区差的小时数,这两个字段值本来应该指明同一个时间(忽略传输导致的误差), <strong>因为时区的原因引起了数据不一致</strong> 。 </li></ul></li><li>总结:<ul><li>在insert的时候,当time.Time映射到date、datetime和timestamp时,都可以认为是字符串。如果 timestamp 由mysql server端更新,可能会有数据的一致性问题。</li><li>在select的时候,当date、datetime和timestamp 映射到 time.Time 时,time.Time的时区时间为其字面量,时区为DSN配置的时区。</li></ul></li></ul><h2 id="三、源码分析"><a href="#三、源码分析" class="headerlink" title="三、源码分析"></a>三、源码分析</h2><blockquote><p>实验已经做完了,大概已经知道golang对mysql时间类型数据转换的方式以及可能存在的问题。那么一起从源码的角度分析此问题,加深我们对其的理解。</p></blockquote><p>1.golang中time.Time存入mysql的分析:<br>跟踪golang运行sql的源码,在运行DB.Exec()时会调用interpolateParams()方法,其调用堆栈如下。 </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">database/sql/sql.go : DB.Exec()-->DB.ExecContext()-->DB.exec()-->DB.execDC()</span><br><span class="line">database/sql/ctxutil.go : ctxDriverExec()-->execer.Exec() </span><br><span class="line">github.com/go-sql-driver/mysql/connection.go : mysqlConn.Exec() --> mysqlConn.interpolateParams()</span><br></pre></td></tr></table></figure><p>它对time.Time类型的变量会经过如下截图逻辑。可以看到golang对于time.Time类型,只会对其时区时间转为字符串,丢弃其时区信息,然后拼接到sql字符串中,所以golang存进数据库时区时间跟golang所在时区时间一致。 </p><p><img src="/2019/11/09/浅谈golang对mysql时间类型数据转换的问题/golang存time.Time源码.png"
alt="golang存time.Time源码"></p><p>2.golang中取出mysql的date、datetime、timestamp映射到time.Time的分析:<br>跟踪golang运行sql的源码,发现在运行rows.Next()时会调用readRow()方法,其调用堆栈如下。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">database/sql/sql.go: Rows.Next()-->Rows.nextLocked() </span><br><span class="line">github.com/go-sql-driver/mysql/rows.go: textRows.Next()--> textRows.readRow()</span><br><span class="line">github.com/go-sql-driver/mysql/packets.go: textRows.readRow()</span><br></pre></td></tr></table></figure><p>对mysql的date、datetime、timestamp的变量经过如下逻辑。当程序发现其属于date、datetime、timestamp几种类型的一种时,就把其当成字符串进行解析,并且设置其时区为loc指定的时区。</p><p><img src="/2019/11/09/浅谈golang对mysql时间类型数据转换的问题/golang取time.Time源码.png" alt="golang取time.Time源码"></p><h2 id="四、总结"><a href="#四、总结" class="headerlink" title="四、总结"></a>四、总结</h2><p>1.可以认为timestamp在mysql中值以 UTC时区时间+UTC时区 保存。存储时对当前接受到的时间字符串进行转化,把时区时间根据当前的时区转为UTC时间再进行存储,检索时再转换回当前的时区。 </p><p>2.在mysql中date、datetime均没有时区概念。 </p><p>3.在go-sql-driver驱动中:</p><ul><li>timestamp、date、datetime在转为time.Time时,时区信息是用parseTime=true&loc=xxx中loc的值指定,需要特别注意的是timestamp在mysql中的时区信息被loc替代了。 </li><li>在time.Time转为timestamp、date、datetime时,将会把它们当做字符串,丢弃time.Time的时区信息。</li></ul><h2 id="五、参考资料"><a href="#五、参考资料" class="headerlink" title="五、参考资料"></a>五、参考资料</h2><p>1.查看mysql的时区<br>参考 <a href="https://dev.mysql.com/doc/refman/5.7/en/time-zone-support.html" target="_blank" rel="noopener">https://dev.mysql.com/doc/refman/5.7/en/time-zone-support.html</a> </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">SELECT @@GLOBAL.time_zone, @@SESSION.time_zone; </span><br><span class="line">// or </span><br><span class="line">show variables like 
"system_time_zone";</span><br></pre></td></tr></table></figure><p>2.linux修改时区<br>参考 <a href="http://coolnull.com/235.html" target="_blank" rel="noopener">http://coolnull.com/235.html</a> </p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">查看时区: </span><br><span class="line">zhang@debian-salmon-gb:~/Workspace/go/src/test_time$ ll /etc/localtime </span><br><span class="line">lrwxrwxrwx 1 root root 33 Nov 27 11:54 /etc/localtime -> /usr/share/zoneinfo/Asia/Shanghai </span><br><span class="line"></span><br><span class="line">修改时区: </span><br><span class="line">ln -sf /usr/share/zoneinfo/Europe/London /etc/localtime </span><br><span class="line">ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime </span><br><span class="line"> </span><br><span class="line">如何查具体的时区,如Europe/London、Asia/Shangha: </span><br><span class="line">tzselect</span><br></pre></td></tr></table></figure><p>3.MySQL中有关TIMESTAMP和DATETIME的总结<br><a href="https://www.cnblogs.com/ivictor/p/5028368.html" target="_blank" rel="noopener">https://www.cnblogs.com/ivictor/p/5028368.html</a> </p><p>4.timestamp显示为int<br>使用UNIX_TIMESTAMP(timestamp)可以把timestamp显示为数字类型的值,如1554182297,时区的改变并不会影响此值的显示;如果显示为日期时间,mysql会根据设定的时区显示时间,如CST时区显示为2019-04-02 13:18:17,东一区显示时间为2019-04-02 06:18:17 </p><p>5.go-mysql-driver中时区问题<br><a href="https://github.com/go-sql-driver/mysql/issues/203" target="_blank" rel="noopener">https://github.com/go-sql-driver/mysql/issues/203</a> </p><p>6.golang中的时间和时区<br><a href="https://studygolang.com/articles/14933" target="_blank" rel="noopener">https://studygolang.com/articles/14933</a> </p><p>7.golang 
mysql中timestamp,datetime,int类型的区别与优劣<br><a href="https://studygolang.com/articles/6265" target="_blank" rel="noopener">https://studygolang.com/articles/6265</a> </p>]]></content>
<tags>
<tag> go </tag>
<tag> mysql </tag>
</tags>
</entry>
<entry>
<title>effective go learning 2</title>
<link href="/2018/11/14/effective-go-learning-2.html"/>
<url>/2018/11/14/effective-go-learning-2.html</url>
<content type="html"><![CDATA[<p>从Two-dimensional slices开始,使用中文版的effctive_go学习<br><a href="https://www.kancloud.cn/kancloud/effective/72207" target="_blank" rel="noopener">https://www.kancloud.cn/kancloud/effective/72207</a></p><h2 id="Data:"><a href="#Data:" class="headerlink" title="Data:"></a>Data:</h2><h3 id="二维切片"><a href="#二维切片" class="headerlink" title="二维切片:"></a>二维切片:</h3><ul><li>Go的数组和切片都是一维的。要创建等价的二维数组或者切片,需要定义一个数组的数组或者切片的切片。</li></ul><h3 id="Maps"><a href="#Maps" class="headerlink" title="Maps:"></a>Maps:</h3><ul><li>Map是一种方便,强大的内建数据结构,其将一个类型的值(key)与另一个类型的值(element或value) 关联一起。</li><li>key可以为任何 <strong>定义了等于操作符</strong> 的类型,例如整数,浮点和复数,字符串,指针,接口(只要其动态类型支持等于操作),结构体和数组。</li><li><strong>切片不能 作为map的key,因为它们没有定义等于操作</strong>。和切片类似,<strong>map持有对底层数据结构的引用。如果将map传递给函数,其对map的内容做了改变,则这些改变对于调用者是可见的</strong>。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">attended := <span class="keyword">map</span>[<span class="keyword">string</span>]<span class="keyword">bool</span>{</span><br><span class="line"> <span class="string">"Ann"</span>: <span class="literal">true</span>,</span><br><span class="line"> <span class="string">"Joe"</span>: <span class="literal">true</span>,</span><br><span class="line"> ...}</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span> attended[person] { <span class="comment">// will be false if person is not in the map</span></span><br><span class="line"> fmt.Println(person, <span class="string">"was at the meeting"</span>)}</span><br></pre></td></tr></table></figure><ul><li>如果只测试是否在map中存在,而不关心实际的值,你可以将通常使用变量的地方换成空白标识符(_)</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br></pre></td><td class="code"><pre><span class="line">_, present := timeZone[tz]</span><br></pre></td></tr></table></figure><ul><li>要删除一个map项,使用delete内建函数,其参数为map和要删除的key。即使key已经不在map中,这样做也是安全的。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="built_in">delete</span>(timeZone, <span class="string">"PDT"</span>) <span class="comment">// Now on Standard Time</span></span><br></pre></td></tr></table></figure><ul><li><strong>map不太好判断是否存在某个key,如果key不存在返回的对应类型的零值,如果已有key的value恰好为零值会导致误判</strong></li></ul><h3 id="打印输出"><a href="#打印输出" class="headerlink" title="打印输出"></a>打印输出</h3><ul><li>Go中的格式化打印使用了与C中printf家族类似的风格,不过更加丰富和通用。这些函数位于fmt程序包中,并具有大写的名字:fmt.Printf,fmt.Fprintf,fmt.Sprintf等等。字符串函数(Sprintf等)返回一个字符串,而不是填充到提供的缓冲里。</li><li>你不需要提供一个格式串。对每个Printf,Fprintf和Sprintf,都有另外一对相应的函数,例如Print和Println。这些函数不接受格式串,而是为每个参数生成一个缺省的格式。Println版本还会在参数之间插入一个空格,并添加一个换行,而Print版本只有当两边的操作数都不是字符串的时候才增加一个空格。在这个例子中,每一行都会产生相同的输出。</li><li>格式化打印函数fmt.Fprint等,接受的第一个参数为任何一个实现了io.Writer接口的对象;变量os.Stdout和os.Stderr是常见的实例。</li><li>如果只是想要缺省的转换,像十进制整数,你可以使用 <strong>通用格式%v(代表“value”)</strong>;这正是Print和Println所产生的结果。而且,这个格式可以打印任意的的值,甚至是数组,切片,结构体和map。</li><li>当打印一个结构体时,带修饰的格式 <strong>%+v会将结构体的域使用它们的名字进行注解</strong>,对于任意的值,格式%#v会按照完整的Go语法打印出该值。</li><li>还可以通过 <strong>%q</strong> 来实现带引号的字符串格式,用于类型为 <strong>string或[]byte</strong> 的值。格式 <strong>%#q</strong> 将尽可能的使用反引号。(格式%q还用于整数和符文,产生一个带单引号的符文常量。)</li><li><strong>%x</strong> 用于字符串,字节数组和字节切片,以及整数,生成一个 <strong>长的十六进制字符串</strong>,并且如果在格式中 <strong>有一个空格(% x)</strong>,其将会在 <strong>字节中插入空格</strong>。</li><li>不要在Sprintf里面调用接收者的String方法,否则会造成无穷递归,如下。只有%s匹配才会调用MyString的String方法</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span 
class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> MyString <span class="keyword">string</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(m MyString)</span> <span class="title">String</span><span class="params">()</span> <span class="title">string</span></span> {</span><br><span class="line"> <span class="keyword">return</span> fmt.Sprintf(<span class="string">"MyString=%s"</span>, m) <span class="comment">// Error: will recur forever.</span></span><br><span class="line"><span class="comment">// return fmt.Sprintf("MyString=%s", string(m)) // OK: note conversion.</span></span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>另一种打印技术,是将一个打印程序的参数直接传递给另一个这样的程序。Printf的签名使用了类型…interface{}作为最后一个参数,来指定在格式之后可以出现任意数目的(任意类型的)参数。</li></ul><h3 id="append内建函数"><a href="#append内建函数" class="headerlink" title="append内建函数:"></a>append内建函数:</h3><ul><li>其中T为任意给定类型的占位符。你在Go中是无法写出一个类型T由调用者来确定的函数。这就是为什么append是内建的:它需要编译器的支持。append所做的事情是将元素添加到切片的结尾,并返回结果。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">append</span><span class="params">(slice []T, elements ...T)</span> []<span class="title">T</span></span></span><br><span class="line"><span class="function"></span></span><br><span class="line"><span class="function"><span class="title">x</span> := []<span class="title">int</span></span>{<span class="number">1</span>,<span class="number">2</span>,<span class="number">3</span>}</span><br><span class="line">x = <span class="built_in">append</span>(x, <span class="number">4</span>, <span 
class="number">5</span>, <span class="number">6</span>)</span><br><span class="line">fmt.Println(x)</span><br></pre></td></tr></table></figure><ul><li>如果想要在append中把一个slice添加到另一个slice要怎么做?在调用点使用 “…”,</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">x := []<span class="keyword">int</span>{<span class="number">1</span>,<span class="number">2</span>,<span class="number">3</span>}</span><br><span class="line">y := []<span class="keyword">int</span>{<span class="number">4</span>,<span class="number">5</span>,<span class="number">6</span>}</span><br><span class="line">x = <span class="built_in">append</span>(x, y...)</span><br><span class="line">fmt.Println(x)</span><br></pre></td></tr></table></figure><ul><li>可以看出 “…” 的作用是,把一个slice转为对应的type,作为一个参数列表进行传递</li></ul><a id="more"></a><h2 id="初始化:"><a href="#初始化:" class="headerlink" title="初始化:"></a>初始化:</h2><h3 id="常量"><a href="#常量" class="headerlink" title="常量:"></a>常量:</h3><ul><li>在编译时被创建,即使被定义为函数局部的也如此,并且只能是数字,字符(符文),字符串或者布尔类型。</li><li>由于编译时的限制,定义它们的表达式必须为能被编译器求值的常量表达式。例如,1<<3是一个常量表达式,而math.Sin(math.Pi/4)不是,因为函数调用math.Sin需要在运行时才发生.</li><li>在Go中,枚举常量使用iota枚举器来创建。由于iota可以为表达式的一部分,并且表达式可以被隐式的重复,所以很容易创建复杂的值集。</li><li>Sprintf只有当想要一个字符串的时候,才调用String方法,而%f是想要一个浮点值。</li></ul><h3 id="init函数"><a href="#init函数" class="headerlink" title="init函数:"></a>init函数:</h3><ul><li>init是在 <strong>程序包中所有变量声明都被初始化</strong>,以及所有 <strong>被导入的程序包中的变量初始化之后才被调用</strong>。</li></ul><h2 id="方法:"><a href="#方法:" class="headerlink" title="方法:"></a>方法:</h2><h3 id="指针-vs-值"><a href="#指针-vs-值" class="headerlink" title="指针 vs. 值:"></a>指针 vs. 
值:</h3><ul><li>关于接收者对指针和值的规则是这样的,值方法可以在指针和值上进行调用,而指针方法只能在指针上调用。</li><li>这是因为指针方法可以修改接收者;使用拷贝的值来调用它们,将会导致那些修改会被丢弃。</li></ul><h2 id="接口和其他类型:"><a href="#接口和其他类型:" class="headerlink" title="接口和其他类型:"></a>接口和其他类型:</h2><h3 id="接口"><a href="#接口" class="headerlink" title="接口:"></a>接口:</h3><ul><li>类型可以实现多个接口。例如,如果一个集合实现了sort.Interface,其包含Len(),Less(i, j int) bool和Swap(i, j int),那么它就可以通过程序包sort中的程序来进行排序,同时它还可以有一个自定义的格式器。</li></ul><h3 id="转换"><a href="#转换" class="headerlink" title="转换:"></a>转换:</h3><ul><li>因为如果我们忽略类型名字,这两个类型(Sequence和[]int)是相同的,在它们之间进行转换是合法的。该转换并不创建新的值,只不过是暂时使现有的值具有一个新的类型。(<strong>有其它的合法转换,像整数到浮点,是会创建新值的</strong>。)</li><li>将表达式的类型进行转换,来访问不同的方法集合,这在Go程序中是一种常见用法。例如,我们可以使用已有类型sort.IntSlice来将整个例子简化成这样:</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">type</span> Sequence []<span class="keyword">int</span></span><br><span class="line"></span><br><span class="line"><span class="comment">// Method for printing - sorts the elements before printing</span></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(s Sequence)</span> <span class="title">String</span><span class="params">()</span> <span class="title">string</span></span> {</span><br><span class="line"> sort.IntSlice(s).Sort()</span><br><span class="line"> <span class="keyword">return</span> fmt.Sprint([]<span class="keyword">int</span>(s))</span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>现在,Sequence没有实现多个接口(排序和打印),相反的,我们利用了能够将数据项转换为多个类型(Sequence,sort.IntSlice和[]int)的能力,每个类型完成工作的一部分。这在实际中不常见,但是却可以很有效。</li></ul><h3 id="接口转换和类型断言:"><a href="#接口转换和类型断言:" class="headerlink" 
title="接口转换和类型断言:"></a>接口转换和类型断言:</h3><ul><li>type-switch 语句</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">var</span> value <span class="keyword">interface</span>{} <span class="comment">// Value provided by caller.</span></span><br><span class="line"><span class="keyword">switch</span> str := value.(<span class="keyword">type</span>) {</span><br><span class="line"><span class="keyword">case</span> <span class="keyword">string</span>:</span><br><span class="line"> <span class="keyword">return</span> str</span><br><span class="line"><span class="keyword">case</span> Stringer:</span><br><span class="line"> <span class="keyword">return</span> str.String()</span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>强制转换语句</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">str, ok := value.(<span class="keyword">string</span>)</span><br><span class="line"><span class="keyword">if</span> ok {</span><br><span class="line"> fmt.Printf(<span class="string">"string value is: %q\n"</span>, str)} <span class="keyword">else</span> {</span><br><span class="line"> fmt.Printf(<span class="string">"value is not a string\n"</span>)</span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>type-if 语句</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span 
class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">if</span> str, ok := value.(<span class="keyword">string</span>); ok {</span><br><span class="line"> <span class="keyword">return</span> str</span><br><span class="line">} <span class="keyword">else</span> <span class="keyword">if</span> str, ok := value.(Stringer); ok {</span><br><span class="line"> <span class="keyword">return</span> str.String()</span><br><span class="line">}</span><br></pre></td></tr></table></figure><h3 id="概述"><a href="#概述" class="headerlink" title="概述"></a>概述</h3><ul><li>如果一个类型只是用来实现接口,并且除了该接口以外没有其它被导出的方法,那就不需要导出这个类型。只导出接口,清楚地表明了其重要的是行为,而不是实现,并且其它具有不同属性的实现可以反映原始类型的行为。这也避免了对每个公共方法实例进行重复的文档介绍。</li></ul><h3 id="接口和方法"><a href="#接口和方法" class="headerlink" title="接口和方法:"></a>接口和方法:</h3><ul><li>由于几乎任何事物都可以附加上方法,所以几乎任何事物都能够满足接口的要求。</li><li>ArgServer现在具有和 <strong>HandlerFunc相同的签名</strong>,所以其可以被转换为那个类型:</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">// Argument server.</span></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">ArgServer</span><span class="params">(w http.ResponseWriter, req *http.Request)</span></span> {</span><br><span class="line"> fmt.Fprintln(w, os.Args)</span><br><span class="line">}</span><br></pre></td></tr></table></figure><h2 id="空白标志符:"><a href="#空白标志符:" class="headerlink" title="空白标志符:"></a>空白标志符:</h2><h3 id="空白标识符在多赋值语句中的使用"><a href="#空白标识符在多赋值语句中的使用" class="headerlink" title="空白标识符在多赋值语句中的使用:"></a>空白标识符在多赋值语句中的使用:</h3><ul><li>空白标识符在for range循环中使用的其实是其应用在多语句赋值情况下的一个特例。</li><li>一个多赋值语句需要多个左值,但假如其中某个左值在程序中并没有被使用到,那么就可以用空白标识符来占位,以避免引入一个新的无用变量。</li></ul><h3 id="未使用的导入和变量"><a href="#未使用的导入和变量" class="headerlink" 
title="未使用的导入和变量:"></a>未使用的导入和变量:</h3><ul><li>如果你在程序中导入了一个 <strong>包</strong> 或声明了一个 <strong>变量</strong> 却没有使用的话,会引起编译错误。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">package</span> main</span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> (</span><br><span class="line"> <span class="string">"fmt"</span></span><br><span class="line"> <span class="string">"io"</span></span><br><span class="line"> <span class="string">"log"</span></span><br><span class="line"> <span class="string">"os"</span>)</span><br><span class="line"></span><br><span class="line"><span class="keyword">var</span> _ = fmt.Printf <span class="comment">// For debugging; delete when done.</span></span><br><span class="line"><span class="keyword">var</span> _ io.Reader <span class="comment">// For debugging; delete when done.</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">main</span><span class="params">()</span></span> {</span><br><span class="line"> fd, err := os.Open(<span class="string">"test.go"</span>)</span><br><span class="line"> <span class="keyword">if</span> err != <span class="literal">nil</span> {</span><br><span class="line"> log.Fatal(err)</span><br><span 
class="line"> }</span><br><span class="line"> <span class="comment">// <span class="doctag">TODO:</span> use fd.</span></span><br><span class="line"> _ = fd</span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>按照约定,用来临时禁止未使用导入错误的全局声明语句必须 <strong>紧随导入语句块</strong> 之后,并且需要提供相应的注释信息 —— 这些规定使得将来很容易找到并删除这些语句。</li></ul><h3 id="副作用式导入"><a href="#副作用式导入" class="headerlink" title="副作用式导入:"></a>副作用式导入:</h3><ul><li>像上面例子中的导入的包,fmt或io,最终要么被使用,要么被删除:使用空白标识符只是一种临时性的举措。但有时,导入一个包仅仅是为了引入一些副作用,而不是为了真正使用它们。</li><li>例如,net/http/pprof包会在其导入阶段调用init函数,该函数注册HTTP处理程序以提供调试信息。这个包中确实也包含一些导出的API,但大多数客户端只会通过注册处理函数的方式访问web页面的数据,而不需要使用这些API。</li><li>为了实现仅为副作用而导入包的操作,可以在导入语句中,将包用空白标识符进行重命名:</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> _ <span class="string">"net/http/pprof"</span></span><br></pre></td></tr></table></figure><ul><li>这是一种非常干净的导入包的方式,由于在当前文件中,<strong>被导入的包是匿名的</strong>,因此你无法访问包内的任何符号。</li></ul><p>接口检查:</p><ul><li>一个类型不需要明确的声明它实现了某个接口。一个类型要实现某个接口,只需要实现该接口对应的方法就可以了。</li><li>在实际中,多数接口的类型转换和检查都是在编译阶段静态完成的。<ul><li>其中一个例子来自encoding/json包内定义的Marshaler接口。</li><li>当JSON编码器接收到一个实现了Marshaler接口的参数时,就调用该参数的marshaling方法来代替标准方法处理JSON编码。编码器利用类型断言机制在运行时进行类型检查:</li></ul></li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">m, ok := val.(json.Marshaler)</span><br></pre></td></tr></table></figure><ul><li>假设我们只是想知道某个类型是否实现了某个接口,而实际上并不需要使用这个接口本身 —— 例如在一段错误检查代码中 —— 那么可以使用空白标识符来忽略类型断言的返回值:</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">if</span> _, ok := val.(json.Marshaler); ok {</span><br><span class="line"> fmt.Printf(<span
class="string">"value %v of type %T implements json.Marshaler\n"</span>, val, val)</span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>在某些情况下,我们必须在包的内部确保某个类型确实满足某个接口的定义。例如类型json.RawMessage,如果它要提供一种定制的JSON格式,就必须实现json.Marshaler接口,但是编译器不会自动对其进行静态类型验证。如果该类型在实现上没有充分满足接口定义,JSON编码器仍然会工作,只不过不是用定制的方式。为了确保接口实现的正确性,可以在包内部,利用空白标识符进行一个全局声明:</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">var</span> _ json.Marshaler = (*RawMessage)(<span class="literal">nil</span>)</span><br></pre></td></tr></table></figure><ul><li>在该声明中,赋值语句导致了从 <strong>*RawMessage到Marshaler的类型转换</strong>,这要求 <strong>*RawMessage必须正确实现了Marshaler接口</strong> ,该属性将在编译期间被检查。当json.Marshaler接口被修改后,上面的代码将无法正确编译,因而很容易发现错误并及时修改代码。</li><li>在这个结构中出现的空白标识符,表示了该声明语句仅仅是为了触发编译器进行类型检查,而非创建任何新的变量。但是,也不需要对所有满足某接口的类型都进行这样的处理。按照约定,这类声明仅当代码中没有其他静态转换时才需要使用,这类情况通常很少出现。</li></ul><h2 id="内嵌:"><a href="#内嵌:" class="headerlink" title="内嵌:"></a>内嵌:</h2><ul><li>接口只能“内嵌”接口类型。</li><li>在“内嵌”和“子类型”两种方法间存在一个重要的区别。当我们内嵌一个类型时,该类型的所有方法会变成外部类型的方法,但是当这些方法被调用时,其接收的参数仍然是内部类型,而非外部类型。在本例中,一个bufio.ReadWriter类型的Read方法被调用时,其效果和调用我们刚刚实现的那个Read方法是一样的,只不过前者接收的参数是ReadWriter的reader字段,而不是ReadWriter本身。</li></ul><h2 id="并发:"><a href="#并发:" class="headerlink" title="并发:"></a>并发:</h2><h3 id="以通信实现共享:"><a href="#以通信实现共享:" class="headerlink" title="以通信实现共享:"></a>以通信实现共享:</h3><ul><li>Go语言鼓励开发者采用一种不同的方法,即将共享变量通过Channel相互传递 —— 事实上并没有真正在不同的执行线程间共享数据 —— 的方式解决上述问题。在任意时刻,仅有一个Goroutine可以访问某个变量。数据竞争问题在设计上就被规避了。</li></ul><h3 id="Goroutines"><a href="#Goroutines" class="headerlink" title="Goroutines:"></a>Goroutines:</h3><ul><li>每个Goroutine都对应一个非常简单的模型:它是一个并发的函数执行线索,并且在多个并发的Goroutine间,资源是共享的。</li><li>Goroutine非常轻量,创建的开销不会比栈空间分配的开销大多少。并且其初始栈空间很小 —— 这也就是它轻量的原因 —— 在后续执行中,会根据需要在堆空间分配(或释放)额外的栈空间。</li><li>闭包(closure):实现保证了在这类函数中被 <strong>引用的变量在函数结束之前不会被释放</strong>。</li></ul><h3 id="Channel"><a href="#Channel" 
class="headerlink" title="Channel:"></a>Channel:</h3><ul><li>与map结构类似,channel也是通过make进行分配的,其返回值实际上是一个指向底层相关数据结构的引用。</li><li>如果在创建channel时提供一个可选的整型参数,会设置该channel的缓冲区大小。该值缺省为0,用来构建默认的“无缓冲channel”,也称为“同步channel”。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">ci := <span class="built_in">make</span>(<span class="keyword">chan</span> <span class="keyword">int</span>) <span class="comment">// unbuffered channel of integers</span></span><br><span class="line">cj := <span class="built_in">make</span>(<span class="keyword">chan</span> <span class="keyword">int</span>, <span class="number">0</span>) <span class="comment">// unbuffered channel of integers</span></span><br><span class="line">cs := <span class="built_in">make</span>(<span class="keyword">chan</span> *os.File, <span class="number">100</span>) <span class="comment">// buffered channel of pointers to Files</span></span><br></pre></td></tr></table></figure><ul><li>无缓冲的channel使得通信—值的交换—和同步机制组合—共同保证了两个执行线索(Goroutines)运行于可控的状态。</li><li><strong>循环的迭代变量会在循环中被重用</strong>,因此req变量会在所有Goroutine间共享。</li><li>为了避免在多个goroutine中共享变量,可以把变量以函数参数的形式传入,也可以创建一个新的同名变量,如下。虽然看起来有些奇怪,但它确实是合法的并且在Go中是一种惯用的方法。你可以如法炮制一个新的同名变量,用来为每个Goroutine创建循环变量的私有拷贝。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">Serve</span><span class="params">(queue <span class="keyword">chan</span> *Request)</span></span> {</span><br><span class="line"> <span class="keyword">for</span> 
req := <span class="keyword">range</span> queue {</span><br><span class="line"> <-sem</span><br><span class="line"> req := req <span class="comment">// Create new instance of req for the goroutine.</span></span><br><span class="line"> <span class="keyword">go</span> <span class="function"><span class="keyword">func</span><span class="params">()</span></span> {</span><br><span class="line"> process(req)</span><br><span class="line"> sem <- <span class="number">1</span></span><br><span class="line"> }()</span><br><span class="line"> }}</span><br></pre></td></tr></table></figure><h3 id="Channel类型的Channel"><a href="#Channel类型的Channel" class="headerlink" title="Channel类型的Channel:"></a>Channel类型的Channel:</h3><ul><li>Channel在Go语言中是一个 first-class 类型,这意味着channel可以像其他 first-class 类型变量一样进行分配、传递。该属性的一个常用方法是用来实现安全、并行的解复用(demultiplexing)处理。</li></ul><h3 id="并行"><a href="#并行" class="headerlink" title="并行:"></a>并行:</h3><ul><li>对于用户态任务,我们默认仅提供一个物理CPU进行处理。任意数目的Goroutine可以阻塞在系统调用上,但 <strong>默认情况下,在任意时刻,只有一个Goroutine</strong> 可以被调度执行。</li><li>目前,你必须通过 <strong>设置GOMAXPROCS环境变量</strong> 或者 <strong>导入runtime包并调用runtime.GOMAXPROCS(NCPU)</strong>, 来告诉Go的运行时系统最大并行执行的Goroutine数目。</li><li><strong>可以通过runtime.NumCPU()</strong> 获得当前运行系统的逻辑核数,作为一个有用的参考。需要重申:上述方法可能会随我们对实现的完善而最终被淘汰。</li><li>注意不要把“并发”和“并行”这两个概念搞混:“并发”是指用一些彼此独立的执行模块构建程序;而“并行”则是指通过将计算任务在多个处理器上同时执行以提高效率。尽管对于一些问题,我们可以利用“并发”特性方便的构建一些并行的程序部件,但是Go终究是一门“并发”语言而非“并行”语言,并非所有的并行编程模式都适用于Go语言模型。</li></ul><h2 id="错误:"><a href="#错误:" class="headerlink" title="错误:"></a>错误:</h2><ul><li>向调用者返回某种形式的错误信息是库例程必须提供的一项功能。通过前面介绍的函数多返回值的特性,Go中的错误信息可以很容易同正常情况下的返回值一起返回给调用者。</li><li>对于需要精确分析错误信息的调用者,可以通过类型开关或类型断言的方式查看具体的错误并深入错误的细节。就PathError类型而言,这些细节信息包含在一个内部的Err字段中,可以被用来进行错误恢复。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span 
class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">for</span> try := <span class="number">0</span>; try < <span class="number">2</span>; try++ {</span><br><span class="line"> file, err = os.Create(filename)</span><br><span class="line"> <span class="keyword">if</span> err == <span class="literal">nil</span> {</span><br><span class="line"> <span class="keyword">return</span></span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">if</span> e, ok := err.(*os.PathError); ok && e.Err == syscall.ENOSPC {</span><br><span class="line"> deleteTempFiles() <span class="comment">// Recover some space.</span></span><br><span class="line"> <span class="keyword">continue</span></span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">return</span></span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>第二个if语句是另一种形式的类型断言。如该断言失败,ok的值将为false且e的值为nil。如果断言成功,则ok值为true,说明当前的错误,也就是e,属于*os.PathError类型,因而可以进一步获取更多的细节信息。</li></ul><h3 id="严重故障(Panic)"><a href="#严重故障(Panic)" class="headerlink" title="严重故障(Panic):"></a>严重故障(Panic):</h3><ul><li>通常来说,向调用者报告错误的方式就是返回一个额外的error变量: Read方法就是一个很好的例子;该方法返回一个字节计数值和一个error变量。但是对于那些不可恢复的错误,比如错误发生后程序将不能继续执行的情况,该如何处理呢?</li><li>为了解决上述问题,Go语言提供了一个内置的 <strong>panic方法</strong>,用来 <strong>创建一个运行时错误并结束当前程序</strong>(关于退出机制,下一节还有进一步介绍)。该函数接受一个任意类型的参数,并在程序挂掉之前打印该参数内容,通常我们会选择一个字符串作为参数。方法panic还适用于指示一些程序中的不可达状态,比如从一个无限循环中退出。</li><li>在实际的库设计中,应尽量避免使用panic。如果程序错误可以以某种方式掩盖或是绕过,那么最好还是继续执行而不是让整个程序终止。不过还是有一些反例的,比方说,如果库例程确实没有办法正确完成其初始化过程,那么触发panic退出可能就是一种更加合理的方式。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span 
class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">var</span> user = os.Getenv(<span class="string">"USER"</span>)</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">init</span><span class="params">()</span></span> {</span><br><span class="line"> <span class="keyword">if</span> user == <span class="string">""</span> {</span><br><span class="line"> <span class="built_in">panic</span>(<span class="string">"no value for $USER"</span>)</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><h3 id="恢复(Recover)"><a href="#恢复(Recover)" class="headerlink" title="恢复(Recover):"></a>恢复(Recover):</h3><ul><li>对于一些隐式的运行时错误,如切片索引越界、类型断言错误等情形下,panic方法就会被调用,它将 <strong>立刻中断当前函数的执行,并展开当前Goroutine的调用栈,依次执行之前注册的defer函数。当栈展开操作达到该Goroutine栈顶端时,程序将终止</strong>。但这时仍然 <strong>可以使用Go的内建recover方法重新获得Goroutine的控制权,并将程序恢复到正常执行的状态</strong>。</li><li>调用recover方法会终止栈展开操作并返回之前传递给panic方法的那个参数。由于在栈展开过程中,只有defer型函数会被执行,因此recover的调用必须置于defer函数内才有效。</li><li>在下面的示例应用中,调用recover方法会终止server中失败的那个Goroutine,但server中其它的Goroutine将继续执行,不受影响。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">server</span><span class="params">(workChan <-<span class="keyword">chan</span> *Work)</span></span> {</span><br><span class="line"> <span class="keyword">for</span> 
work := <span class="keyword">range</span> workChan {</span><br><span class="line"> <span class="keyword">go</span> safelyDo(work)</span><br><span class="line"> }}</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">safelyDo</span><span class="params">(work *Work)</span></span> {</span><br><span class="line"> <span class="keyword">defer</span> <span class="function"><span class="keyword">func</span><span class="params">()</span></span> {</span><br><span class="line"> <span class="keyword">if</span> err := <span class="built_in">recover</span>(); err != <span class="literal">nil</span> {</span><br><span class="line"> log.Println(<span class="string">"work failed:"</span>, err)</span><br><span class="line"> }</span><br><span class="line"> }()</span><br><span class="line"> do(work)</span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>在这里例子中,如果do(work)调用发生了panic,则其结果 <strong>将被记录且发生错误的那个Goroutine将干净的退出</strong>,不会干扰其他Goroutine。你不需要在defer指示的闭包中做别的操作,仅需调用recover方法,它将帮你搞定一切。</li><li>只有直接在defer函数中调用recover方法,才会返回非nil的值,因此defer函数的代码可以调用那些本身 <strong>使用了panic和recover的库函数</strong> 而不会引发错误。还用上面的那个例子说明:safelyDo里的defer函数在调用recover之前可能调用了一个日志记录函数,而日志记录程序的执行将不受panic状态的影响。(这段话的意思讨论的是,在defer函数中需要使用其他库函数时,如果该库函数也使用了panic和recover来优雅退出自身的函数调用链,那么将不会影响defer函数中panic的状态;如果未使用相关的技术,那么将会污染/影响defer函数对panic判断。recover返回空则未panic,返回非空则panic)</li><li>有了错误恢复的模式,do函数及其调用的代码可以通过调用panic方法,以 <strong>一种很干净的方式从错误状态中恢复</strong>。我们可以使用该特性为那些复杂的软件实现更加简洁的错误处理代码。</li><li>让我们来看下面这个例子,它是regexp包的一个简化版本,它通过调用panic并传递一个局部错误类型来报告“解析错误”(Parse Error)。下面的代码包括了Error类型定义,error处理方法以及Compile函数:</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span 
class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">// Error is the type of a parse error; it satisfies the error interface.</span></span><br><span class="line"><span class="keyword">type</span> Error <span class="keyword">string</span></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(e Error)</span> <span class="title">Error</span><span class="params">()</span> <span class="title">string</span></span> {</span><br><span class="line"> <span class="keyword">return</span> <span class="keyword">string</span>(e)}</span><br><span class="line"></span><br><span class="line"><span class="comment">// error is a method of *Regexp that reports parsing errors by// panicking with an Error.</span></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="params">(regexp *Regexp)</span> <span class="title">error</span><span class="params">(err <span class="keyword">string</span>)</span></span> {</span><br><span class="line"> <span class="built_in">panic</span>(Error(err))}</span><br><span class="line"></span><br><span class="line"><span class="comment">// Compile returns a parsed representation of the regular expression.</span></span><br><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">Compile</span><span class="params">(str <span class="keyword">string</span>)</span> <span class="params">(regexp *Regexp, err error)</span></span> {</span><br><span class="line"> regexp = <span 
class="built_in">new</span>(Regexp)</span><br><span class="line"> <span class="comment">// doParse will panic if there is a parse error.</span></span><br><span class="line"> <span class="keyword">defer</span> <span class="function"><span class="keyword">func</span><span class="params">()</span></span> {</span><br><span class="line"> <span class="keyword">if</span> e := <span class="built_in">recover</span>(); e != <span class="literal">nil</span> {</span><br><span class="line"> regexp = <span class="literal">nil</span> <span class="comment">// Clear return value.</span></span><br><span class="line"> err = e.(Error) <span class="comment">// Will re-panic if not a parse error.</span></span><br><span class="line"> }</span><br><span class="line"> }()</span><br><span class="line"> <span class="keyword">return</span> regexp.doParse(str), <span class="literal">nil</span></span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>如果doParse方法触发panic,错误恢复代码会将返回值置为nil—因为defer函数可以修改命名的返回值变量;然后,错误恢复代码会对返回的错误类型进行类型断言,<strong>判断其是否属于Error类型</strong>。如果类型断言失败,则会引发运行时错误,并继续进行栈展开,最后终止程序 —— 这个过程将不再会被中断。类型检查失败可能意味着程序中还有其他部分触发了panic,如果某处存在索引越界访问等,因此,即使我们已经使用了panic和recover机制来处理解析错误,程序依然会异常终止。(err = e.(Err)是上面代码的关键部分,如果断言失败,则意味着不是本包有意抛出的panic,因此应该继续向上抛出直至被再次捕捉或者最终终止程序;panic(Error(err)) 这句代码对err进行了类型转换,并传入panic函数中)</li><li>有了上面的错误处理过程,调用error方法(由于它是一个类型的绑定的方法,因而即使与内建类型error同名,也不会带来什么问题,甚至是一直更加自然的用法)使得“解析错误”的报告更加方便,无需费心去考虑手工处理栈展开过程的复杂问题。</li><li>上面这种模式的妙处在于,<strong>它完全被封装在模块的内部</strong>,Parse方法将其 <strong>内部对panic的调用隐藏在error之中</strong>;而不会将panics信息暴露给外部使用者。这是一个 <strong>设计良好且值得学习的编程技巧</strong>。</li><li>这样做的缺点是:<ul><li>顺便说一下,当确实有错误发生时,我们习惯采取的“重新触发panic”(re-panic)的方法会改变panic的值。但 <strong>新旧错误信息都会出现在崩溃 报告中(上面新错误信息为: interface conversion: interface {} is xxx, not main.Error)</strong>,引发错误的原始点仍然可以找到。所以,通常这种简单的重新触发panic的机制就足够了—所有这些错误最终导致了程序的崩溃 <strong>(可以通过查阅调用栈的方式找到真正发生错误的地方)</strong>—但是如果只想显示最 
初的错误信息的话,你就需要稍微多写一些代码来过滤掉那些由重新触发引入的多余信息。这个功能就留给读者自己去实现吧!</li></ul></li></ul>]]></content>
<tags>
<tag> go </tag>
</tags>
</entry>
<entry>
<title>go系列-中间件</title>
<link href="/2018/09/29/go%E7%B3%BB%E5%88%97-%E4%B8%AD%E9%97%B4%E4%BB%B6.html"/>
<url>/2018/09/29/go%E7%B3%BB%E5%88%97-%E4%B8%AD%E9%97%B4%E4%BB%B6.html</url>
<content type="html"><![CDATA[<h2 id="go中间件"><a href="#go中间件" class="headerlink" title="go中间件"></a>go中间件</h2><p>最近看代码看到go中间件的代码,遂搜相关代码以及类似的框架进行学习 </p><h3 id="什么是中间件"><a href="#什么是中间件" class="headerlink" title="什么是中间件"></a>什么是中间件</h3><ul><li><p>了解中间件前需要了解 ServeMux、DefaultServeMux、http.Handler、http.HandlerFunc、mux.HandleFunc、ServeHTTP 等相关知识和它们之间的关系[推荐]<br><a href="https://www.alexedwards.net/blog/a-recap-of-request-handling" target="_blank" rel="noopener">https://www.alexedwards.net/blog/a-recap-of-request-handling</a></p></li><li><p>context能做什么<br><a href="https://blog.questionable.services/article/map-string-interface/" target="_blank" rel="noopener">https://blog.questionable.services/article/map-string-interface/</a></p></li></ul><h3 id="gorilla系列-amp-Negroni"><a href="#gorilla系列-amp-Negroni" class="headerlink" title="gorilla系列 & Negroni"></a>gorilla系列 & Negroni</h3><ul><li><p>Go实战–Golang中http中间件(goji/httpauth、urfave/negroni、gorilla/handlers、justinas/alice)<br><a href="https://blog.csdn.net/wangshubo1989/article/details/79227443" target="_blank" rel="noopener">https://blog.csdn.net/wangshubo1989/article/details/79227443</a></p></li><li><p>Go实战–Gorilla web toolkit使用之gorilla/handlers<br><a href="https://blog.csdn.net/wangshubo1989/article/details/78970282" target="_blank" rel="noopener">https://blog.csdn.net/wangshubo1989/article/details/78970282</a></p></li><li><p>Go实战–Gorilla web toolkit使用之gorilla/context<br><a href="https://blog.csdn.net/wangshubo1989/article/details/78910935" target="_blank" rel="noopener">https://blog.csdn.net/wangshubo1989/article/details/78910935</a></p></li><li><p>gorilla/mux <a href="https://github.com/gorilla/mux" target="_blank" rel="noopener">https://github.com/gorilla/mux</a> (需要着重看一下 文中的Graceful Shutdown和 <a href="https://github.com/gorilla/mux#graceful-shutdown)" target="_blank" rel="noopener">https://github.com/gorilla/mux#graceful-shutdown)</a></p></li><li><p>Negroni <a 
href="https://github.com/urfave/negroni" target="_blank" rel="noopener">https://github.com/urfave/negroni</a></p></li></ul><p>代码学习:<a href="https://github.com/salmon7/go-learning/tree/master/middle" target="_blank" rel="noopener">https://github.com/salmon7/go-learning/tree/master/middle</a></p><h3 id="justinas-alice"><a href="#justinas-alice" class="headerlink" title="justinas/alice"></a>justinas/alice</h3><p>// TO-DO</p>]]></content>
<tags>
<tag> go </tag>
</tags>
</entry>
<entry>
<title>gitlab ci && docker-compose(1)-基础知识</title>
<link href="/2018/09/27/gitlab-ci-docker-compose(1)-%E5%9F%BA%E7%A1%80%E7%9F%A5%E8%AF%86.html"/>
<url>/2018/09/27/gitlab-ci-docker-compose(1)-%E5%9F%BA%E7%A1%80%E7%9F%A5%E8%AF%86.html</url>
<content type="html"><![CDATA[<p>工作中需要使用到gitlab ci和docker-compose,而docker是这两者的前提。网上docker学习资料有很多,但是有一大部分是过时的,官网也有详细的文档,但是快速阅读起来比较慢,毕竟不是母语。之前学习的时候走了很多弯路,现把最近搜集到的比较好的资料分享出来,希望对大家有帮助</p><h2 id="docker学习资料"><a href="#docker学习资料" class="headerlink" title="docker学习资料"></a>docker学习资料</h2><p>Docker — 从入门到实践(一个不错的docker入门教程,极力推荐):</p><ul><li><a href="https://github.com/yeasy/docker_practice" target="_blank" rel="noopener">https://github.com/yeasy/docker_practice</a></li><li><a href="https://docker_practice.gitee.io/" target="_blank" rel="noopener">https://docker_practice.gitee.io/</a></li></ul><p>Docker 问答录(100 问):<a href="https://blog.lab99.org/post/docker-2016-07-14-faq.html" target="_blank" rel="noopener">https://blog.lab99.org/post/docker-2016-07-14-faq.html</a></p><p>dockerfile文档:<a href="https://docs.docker.com/engine/reference/builder/" target="_blank" rel="noopener">https://docs.docker.com/engine/reference/builder/</a></p><p>dockerfile的最佳实践:<a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/" target="_blank" rel="noopener">https://docs.docker.com/develop/develop-images/dockerfile_best-practices/</a></p><p>docker之Dockerfile实践(以nginx为例,一步一步构建镜像):<a href="http://www.cnblogs.com/jsonhc/p/7767669.html" target="_blank" rel="noopener">http://www.cnblogs.com/jsonhc/p/7767669.html</a></p><h2 id="docker-compose学习资料"><a href="#docker-compose学习资料" class="headerlink" title="docker-compose学习资料"></a>docker-compose学习资料</h2><p>docker-compose中的environment:<a href="https://docs.docker.com/compose/compose-file/#environment" target="_blank" rel="noopener">https://docs.docker.com/compose/compose-file/#environment</a></p><p>docker-compose中的变量:<a href="https://docs.docker.com/compose/compose-file/#variable-substitution" target="_blank" rel="noopener">https://docs.docker.com/compose/compose-file/#variable-substitution</a></p><p>docker-compose中的变量的优先顺序:<a href="https://docs.docker.com/compose/environment-variables/" target="_blank" 
rel="noopener">https://docs.docker.com/compose/environment-variables/</a></p><p>Declare default environment variables in file:<a href="https://docs.docker.com/compose/env-file/" target="_blank" rel="noopener">https://docs.docker.com/compose/env-file/</a></p><p>ENTRYPOINT的shell模式的副作用:<a href="https://docs.docker.com/engine/reference/builder/#entrypoint" target="_blank" rel="noopener">https://docs.docker.com/engine/reference/builder/#entrypoint</a></p><p>vishnubob/wait-for-it <a href="https://github.com/vishnubob/wait-for-it" target="_blank" rel="noopener">https://github.com/vishnubob/wait-for-it</a></p><p>dockerfile最佳实践:<a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/" target="_blank" rel="noopener">https://docs.docker.com/develop/develop-images/dockerfile_best-practices/</a></p><h2 id="gitlab-ci学习资料"><a href="#gitlab-ci学习资料" class="headerlink" title="gitlab-ci学习资料"></a>gitlab-ci学习资料</h2><p>这部分学习大部分都是官网,不做过多的分享,只留一个默认变量的链接</p><p>ci的默认变量:<a href="https://docs.gitlab.com/ce/ci/variables/README.html" target="_blank" rel="noopener">https://docs.gitlab.com/ce/ci/variables/README.html</a></p>]]></content>
<tags>
<tag> docker </tag>
<tag> gitlabci </tag>
</tags>
</entry>
<entry>
<title>gitlab ci && docker-compose(6)-容器启动先后顺序</title>
<link href="/2018/09/26/gitlab-ci-docker-compose(6)-%E5%AE%B9%E5%99%A8%E5%90%AF%E5%8A%A8%E5%85%88%E5%90%8E%E9%A1%BA%E5%BA%8F.html"/>
<url>/2018/09/26/gitlab-ci-docker-compose(6)-%E5%AE%B9%E5%99%A8%E5%90%AF%E5%8A%A8%E5%85%88%E5%90%8E%E9%A1%BA%E5%BA%8F.html</url>
<content type="html"><![CDATA[<h2 id="docker-compose容器启动先后顺序问题"><a href="#docker-compose容器启动先后顺序问题" class="headerlink" title="docker-compose容器启动先后顺序问题"></a>docker-compose容器启动先后顺序问题</h2><p>App应用程序容器需要连接一个mysql容器,使用docker-compose启动容器组,应该怎么做?在docker-compose中,容器A依赖容器B,B容器会先启动,然后再启动A容器,但是B容器不一定初始化完毕对外服务。</p><p>先来看一段mysql官方对于这种问题的两段说明</p><blockquote><p>No connections until MySQL init completes</p><ul><li>If there is no database initialized when the container starts, then a default database will be created. While this is the expected behavior, this means that it will not accept incoming connections until such initialization completes. This may cause issues when using automation tools, such as docker-compose, which start several containers simultaneously.</li><li>If the application you’re trying to connect to MySQL does not handle MySQL downtime or waiting for MySQL to start gracefully, then a putting a connect-retry loop before the service starts might be necessary. For an example of such an implementation in the official images, see WordPress or Bonita.</li></ul></blockquote><p>所以一般有两种方法解决类似的问题</p><ul><li>程序层面改进,程序连接mysql部分需要有重连机制</li><li>连接容器改进,在shell命令中判断mysql容器是否启动,如果未启动设定时间等待,如果启动了再启动应用程序</li></ul><p><em>这里只说明第二种方法</em></p><h3 id="使用wait-for-it-sh"><a href="#使用wait-for-it-sh" class="headerlink" title="使用wait-for-it.sh"></a>使用wait-for-it.sh</h3><figure class="highlight yml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span 
class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># docker-compose.yml</span></span><br><span class="line"><span class="attr">version:</span> <span class="string">'3'</span></span><br><span class="line"></span><br><span class="line"><span class="attr">services:</span></span><br><span class="line"><span class="attr"> app:</span></span><br><span class="line"><span class="attr"> build:</span></span><br><span class="line"><span class="attr"> context:</span> <span class="string">.</span></span><br><span class="line"><span class="attr"> dockerfile:</span> <span class="string">DockerfileAPP</span></span><br><span class="line"><span class="attr"> image:</span> <span class="attr">app:latest</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="string">"127.0.0.1:8080:8080"</span></span><br><span class="line"><span class="attr"> depends_on:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="string">mysql_db</span></span><br><span class="line"><span class="attr"> command:</span> <span class="string">/go/src/app/wait-for-it.sh</span> <span class="attr">mysql_db:3306</span> <span class="bullet">-s</span> <span class="bullet">-t</span> <span class="number">30</span> <span class="bullet">--</span> <span class="string">/go/src/app/app-release</span> <span class="string">start</span></span><br><span class="line"></span><br><span class="line"><span class="attr"> mysql_db:</span></span><br><span class="line"> <span class="comment">#build:</span></span><br><span class="line"> <span class="comment"># context: .</span></span><br><span 
class="line"> <span class="comment"># dockerfile: DockerfileMySQL</span></span><br><span class="line"> <span class="comment">#image: mysql_db:latest</span></span><br><span class="line"><span class="attr"> image:</span> <span class="string">"mysql:5.7.22"</span></span><br><span class="line"><span class="attr"> environment:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="string">MYSQL_ROOT_PASSWORD=test</span></span><br><span class="line"><span class="attr"> expose:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="string">"3306"</span></span><br><span class="line"><span class="attr"> volumes:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="string">./init_sql_script/:/docker-entrypoint-initdb.d/</span></span><br></pre></td></tr></table></figure><p>注意这行代码</p><blockquote><p>command: /go/src/app/wait-for-it.sh mysql_db:3306 -s -t 30 -- /go/src/app/app-release start</p></blockquote><p>-s 表示如果没有检测到host为mysql_db的3306端口,则不执行后面的命令;-t 30表示超时时间为30秒,更多配置见参考。wait-for-it.sh可以使用多次,比如需要等待mysql和redis,可以这么写<code>command: /go/src/app/wait-for-it.sh mysql_db:3306 -s -t 30 -- /go/src/app/wait-for-it.sh redis:6379 -s -t 30 -- /go/src/app/app-release start</code></p><p>参考:</p><p>mysql官方docker说明:<a href="https://hub.docker.com/_/mysql/" target="_blank" rel="noopener">https://hub.docker.com/_/mysql/</a></p><p>Control startup order in Compose:<a href="https://docs.docker.com/compose/startup-order/" target="_blank" rel="noopener">https://docs.docker.com/compose/startup-order/</a></p><p>vishnubob/wait-for-it:<a href="https://github.com/vishnubob/wait-for-it" target="_blank" rel="noopener">https://github.com/vishnubob/wait-for-it</a></p><p>Docker-compose check if mysql connection is ready:<a href="https://stackoverflow.com/questions/42567475/docker-compose-check-if-mysql-connection-is-ready" target="_blank" 
rel="noopener">https://stackoverflow.com/questions/42567475/docker-compose-check-if-mysql-connection-is-ready</a></p>]]></content>
<tags>
<tag> docker </tag>
<tag> gitlabci </tag>
</tags>
</entry>
<entry>
<title>gitlab ci && docker-compose(5)-容器启动环境变量传递</title>
<link href="/2018/09/25/gitlab-ci-docker-compose(5)-%E5%AE%B9%E5%99%A8%E5%90%AF%E5%8A%A8%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E4%BC%A0%E9%80%92.html"/>
<url>/2018/09/25/gitlab-ci-docker-compose(5)-%E5%AE%B9%E5%99%A8%E5%90%AF%E5%8A%A8%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F%E4%BC%A0%E9%80%92.html</url>
<content type="html"><![CDATA[<h2 id="docker-run和docker-compose启动容器环境变量传递"><a href="#docker-run和docker-compose启动容器环境变量传递" class="headerlink" title="docker run和docker-compose启动容器环境变量传递"></a>docker run和docker-compose启动容器环境变量传递</h2><h3 id="dockrfile中的ENV和CMD的关系(不考虑ENTRYPOINT)"><a href="#dockrfile中的ENV和CMD的关系(不考虑ENTRYPOINT)" class="headerlink" title="dockrfile中的ENV和CMD的关系(不考虑ENTRYPOINT)"></a>dockrfile中的ENV和CMD的关系(不考虑ENTRYPOINT)</h3><ul><li><p>docker run使用默认命令 && dockerfile中 ENV VERSION=100, CMD [“echo”,”$VERSION”]:输出空</p></li><li><p>docker run使用默认命令 && dockrfile中 ENV VERSION=100,CMD [“sh”,”-c”,”echo”,”$VERSION”]:输出空</p></li><li><p>docker run使用默认命令 && dockrfile中 ENV VERSION=100,CMD [“sh”,”-c”,”echo $VERSION”]:输出100</p></li><li><p>docker run使用默认命令 && dockrfile中 ENV VERSION=100,CMD echo $VERSION:输出100</p></li></ul><p>小结:docker run使用默认命令中要读取容器内部的环境变量的话,一定要使用后两种方式。并且需要记住的是,使用默认命令的情况下主机的环境变量不会影响container的变量,比如在root的shell下执行export VERSION=101,对以上四个结果都不会有影响</p><h3 id="docke-run使用指定命令执行与shell环境变零的关系"><a href="#docke-run使用指定命令执行与shell环境变零的关系" class="headerlink" title="docke run使用指定命令执行与shell环境变零的关系"></a>docke run使用指定命令执行与shell环境变零的关系</h3><p>docke run使用指定命令,docker run image_name echo $VERSION:则输出本地shell的VERSION变量,这个VERSION变量跟container一点关系都没有,完全取决于当前shell的环境变量。需要注意的是,由于docker run时一般是在root权限下,所以执行<code>export VERSION=xxx</code> 时,请先执行<code>su -</code>,避免因为使用sudo改变了实际的shell导致不能输出。</p><a id="more"></a><h3 id="docker-compose中的environment和dockerfile中CMD的关系(dockerfile不考虑ENTRYPOINT,docke-compose不指定command和entrypoint)"><a href="#docker-compose中的environment和dockerfile中CMD的关系(dockerfile不考虑ENTRYPOINT,docke-compose不指定command和entrypoint)" class="headerlink" title="docker-compose中的environment和dockerfile中CMD的关系(dockerfile不考虑ENTRYPOINT,docke-compose不指定command和entrypoint)"></a>docker-compose中的environment和dockerfile中CMD的关系(dockerfile不考虑ENTRYPOINT,docke-compose不指定command和entrypoint)</h3><p>由前面可知,要在CMD使用环境变量,必须是 <code>["sh","-c","echo $VERSION"]</code> 或 <code>echo $VERSION</code> 
模式,其他几个不再解释。</p><ul><li><p>docker-compose.yml不指定enviroment && dockerfile中 ENV VERSION=100, CMD [“sh”,”-c”,”echo $VERSION”]: 输出100</p></li><li><p>docker-compose.yml指定enviroment: ${VERSION:-200} && dockerfile中 ENV VERSION=100, CMD [“sh”,”-c”,”echo $VERSION”]: 输出200</p></li><li><p>docker-compose.yml指定enviroment: ${VERSION:-200} && dockerfile中 ENV VERSION=100, CMD [“sh”,”-c”,”echo $VERSION”] && su - 下执行 export VERSION=300: 输出300</p></li></ul><p>小结:在docker-compose模式下,shell环境变量优先于默认变量(此例中默认变量为200),docker-compose.yml中的environment变量优先于dockerfile中的ENV。其实docker-compose.yml中enviroment中的变量,会在docker-compose up的时候会创建或覆盖容器对应的环境变量,所以导致容器启动时它的优先级高于dockerfile中的ENV。</p><p>对于docker-compose指定command的情况与 ‘docke run使用指定命令执行与shell环境变零的关系’一致,不再举例。</p><p>善用 docker-compose config 查看解析后的yml是怎样的。</p><p>更多参考:</p><p>dockerfile中的cmd:<a href="https://docs.docker.com/engine/reference/builder/#cmd" target="_blank" rel="noopener">https://docs.docker.com/engine/reference/builder/#cmd</a></p><p>dockerfile的最佳实践:<a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/" target="_blank" rel="noopener">https://docs.docker.com/develop/develop-images/dockerfile_best-practices/</a></p><p>docker build传入多个编译时变量:<a href="https://stackoverflow.com/questions/42297387/docker-build-with-build-arg-with-multiple-arguments" target="_blank" rel="noopener">https://stackoverflow.com/questions/42297387/docker-build-with-build-arg-with-multiple-arguments</a></p><p>docker-compose中的environment:<a href="https://docs.docker.com/compose/compose-file/#environment" target="_blank" rel="noopener">https://docs.docker.com/compose/compose-file/#environment</a></p><p>docker-compose中的变量:<a href="https://docs.docker.com/compose/compose-file/#variable-substitution" target="_blank" rel="noopener">https://docs.docker.com/compose/compose-file/#variable-substitution</a></p><p>docker-comppose中的变量的优先顺序:<a href="https://docs.docker.com/compose/environment-variables/" target="_blank" 
rel="noopener">https://docs.docker.com/compose/environment-variables/</a></p><p>Declare default environment variables in file:<a href="https://docs.docker.com/compose/env-file/" target="_blank" rel="noopener">https://docs.docker.com/compose/env-file/</a></p><p>ENTRYPOINT的shell模式的副作用:<a href="https://docs.docker.com/engine/reference/builder/#entrypoint" target="_blank" rel="noopener">https://docs.docker.com/engine/reference/builder/#entrypoint</a></p>]]></content>
<tags>
<tag> docker </tag>
<tag> gitlabci </tag>
</tags>
</entry>
<entry>
<title>gitlab ci && docker-compose(4)-mysql的添加权限</title>
<link href="/2018/09/20/gitlab-ci-docker-compose(4)-mysql%E7%9A%84%E6%B7%BB%E5%8A%A0%E6%9D%83%E9%99%90.html"/>
<url>/2018/09/20/gitlab-ci-docker-compose(4)-mysql%E7%9A%84%E6%B7%BB%E5%8A%A0%E6%9D%83%E9%99%90.html</url>
<content type="html"><![CDATA[<p>目前网上添加msyql用户和权限时,很多都是使用INSERT, UPDATE, DELETE 直接操作权限表,并且总结得参差不齐。根据官方网站应该使用 CREATE USER 语句创建用户,使用 GRANT 语句添加权限。</p><ul><li>在mysql:5.7.22中需要配合 /docker-entrypoint-initdb.d/ 目录初始化数据,这时可以在该目录的sql中可以添加权限的配置。</li></ul><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"> <span class="comment">-- 不应该在这里直接删除'root'@'%',否则会影响原本root应有的权限。可以通过该命令查看该命令的影响,SHOW GRANTS FOR 'root'@'%';</span></span><br><span class="line"> <span class="comment">-- 如果不删用户root,会出现该权限 GRANT ALL PRIVILEGES ON *.* to 'root'@'%' WITH GRANT OPTION</span></span><br><span class="line"> <span class="comment">-- 如果删了用户root,会出现该权限 GRANT USAGE ON *.* to 'root'@'%'</span></span><br><span class="line"> <span class="comment">-- drop user 'root'@'%';</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">-- 密码在启动mysql容器或者在docker-compose.yml文件中时就应该指定,如</span></span><br><span class="line"> <span class="comment">-- docker run -e MYSQL_ROOT_PASSWORD=test mysql:5.7.22</span></span><br><span class="line"> <span class="comment">-- 或</span></span><br><span class="line"> <span class="comment">-- environment:</span></span><br><span class="line"> <span class="comment">-- - MYSQL_ROOT_PASSWORD=test</span></span><br><span class="line"> <span class="comment">-- CREATE USER 'root'@'%' IDENTIFIED BY 'test';</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">GRANT</span> ALL <span class="keyword">PRIVILEGES</span> <span class="keyword">ON</span> YOU_DATABASE.* <span 
class="keyword">TO</span> <span class="string">'root'</span>@<span class="string">'%'</span>;</span><br></pre></td></tr></table></figure><ul><li>当然建库建表前应该用if判断,不判断也行,因为本来就是新实例。</li></ul><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">DROP</span> <span class="keyword">DATABASE</span> <span class="keyword">IF</span> <span class="keyword">EXISTS</span> YOUR_DATABASE;</span><br><span class="line"><span class="keyword">CREATE</span> <span class="keyword">DATABASE</span> YOUR_DATABASE;</span><br><span class="line"></span><br><span class="line"><span class="keyword">USE</span> YOUR_DATABASE;</span><br><span class="line"></span><br><span class="line"><span class="keyword">CREATE</span> <span class="keyword">TABLE</span> <span class="string">`your_table`</span> (</span><br><span class="line"></span><br><span class="line">) <span class="keyword">ENGINE</span>=<span class="keyword">InnoDB</span> <span class="keyword">DEFAULT</span> <span class="keyword">CHARSET</span>=utf8 <span class="keyword">COMMENT</span>=<span class="string">'必要的注释'</span>;</span><br></pre></td></tr></table></figure><ul><li>查看权限</li></ul><figure class="highlight sql"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">SHOW</span> <span class="keyword">GRANTS</span> <span class="keyword">FOR</span> <span class="string">'root'</span>@<span class="string">'%'</span>;</span><br></pre></td></tr></table></figure><h3 id="docker-compose-up命令"><a href="#docker-compose-up命令" class="headerlink" title="docker-compose up命令"></a>docker-compose up命令</h3><p>运行docker-compose 
up时,如果以前未创建相应的镜像,则默认会创建镜像并且根据该镜像启动container;如果以前创建过镜像,则判断当前是否有对应的container,如果有则直接启动,如果没有则创建对应的container;</p><ul><li><p>–build Build images before starting containers.</p><ul><li>–build,加了这个选项后,每次运行 docker-compose up 都会构建镜像。构建镜像有另外一个专门的命令docker-compose build,可以使用–build-arg key=val 传入编译时参数,如下则为 –build-arg X=3 –build-arg Y=4。传入到docker-compose后,再传到dockerfile的 ARG 声明的同名变量中,</li></ul></li></ul><figure class="highlight yml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># docker-compose.yml</span></span><br><span class="line"><span class="attr">version:</span> <span class="string">'3'</span></span><br><span class="line"></span><br><span class="line"><span class="attr">services:</span></span><br><span class="line"><span class="attr"> app:</span></span><br><span class="line"><span class="attr"> build:</span></span><br><span class="line"><span class="attr"> context:</span> <span class="string">.</span></span><br><span class="line"><span class="attr"> dockerfile:</span> <span class="string">docker/dockerfile</span></span><br><span class="line"><span class="attr"> args:</span></span><br><span class="line"><span class="attr"> X:</span> <span 
class="string">${X:-1}</span> <span class="comment">#如果X不传,则X为1</span></span><br><span class="line"><span class="attr"> Y:</span> <span class="string">${Y:-2}</span> <span class="comment">#如果Y不传,则Y为2</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># dockerfile</span></span><br><span class="line"><span class="string">FROM</span> <span class="string">some-image</span></span><br><span class="line"><span class="comment"># 包含编译时默认值</span></span><br><span class="line"><span class="string">ARG</span> <span class="string">X=1</span></span><br><span class="line"><span class="string">ARG</span> <span class="string">Y=gitlabci</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 编译时不包含默认值</span></span><br><span class="line"><span class="comment">#ARG X</span></span><br><span class="line"><span class="comment">#ARG Y</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 设置容器内部环境变量</span></span><br><span class="line"><span class="string">ENV</span> <span class="string">X=$X</span></span><br><span class="line"><span class="string">ENV</span> <span class="string">Y=$Y</span></span><br></pre></td></tr></table></figure><ul><li>–force-recreate Recreate containers even if their configuration and image haven’t changed.<ul><li>–force-recreate,加了这个选项后,docker-compose会重新创建container,即使与它对应的镜像没有变化</li></ul></li><li>-V, –renew-anon-volumes Recreate anonymous volumes instead of retrievingdata from the previous containers.<ul><li>1.22.0版本有这个选项,某些低版本的不存在该选项</li><li>仅在启动的容器已经被创建的情况下有意义</li><li>从字面上来看,使用该选项运行docker-compose up 启动容器时,将会不使用复用上一个container的volume,而是重新创建对应的volume。如果不使用该选项,而是只加了–force-recreate也将仅仅会重新创建对应的容器,而不会重新mount根据对应目录而创建的volume。这个可以通过docker inpsect containter_name命令验证,两次运行docker-compose –build 
–force-recreate所创建的容器对应的volume的name依然相同。</li><li>这个选项的存在的意义是,如果加了改选项,第一次启动容器的时候,mount对应的目录的文件有误,想stop掉当前的container,并且在对应的目录添加了对应的文件后,第二次启动容器时能够重新mount对应的文件夹,使容器读取正确对应的文件;如果不加改选项,则使用上一个container的volume,不能读取正确的文件</li><li>当然,也可以不使用该命令,只要不是重启container的场景即可,什么意思呢,可以先 docker-compose rm -v contaner_name,下次docker-compose up的时候就是创建新容器了,当然也会重新挂载对应的volume。目前低版本的docker-comopose就是这么做的。</li><li>之前在mysql:5.7.22 版本的docker容器在使用/docker-entrypoint-initdb.d/ 目录初始化数据时,遇到类似的问题,踩了很多坑。</li></ul></li><li>总结,docker-compose up 与 docker start 命令更相似,因为它们都会复用之前的container(如果存在),而docker run是总会创建新的container。这要点需要牢记,才能避免踩坑。</li></ul><p>参考:</p><p>Mysql官网 <a href="https://dev.mysql.com/doc/refman/5.7/en/adding-users.html" target="_blank" rel="noopener">https://dev.mysql.com/doc/refman/5.7/en/adding-users.html</a></p>]]></content>
<tags>
<tag> docker </tag>
<tag> gitlabci </tag>
</tags>
</entry>
<entry>
<title>gitlab ci && docker-compose(3)-mysql的初始化</title>
<link href="/2018/09/18/gitlab-ci-docker-compose(3)-mysql%E7%9A%84%E5%88%9D%E5%A7%8B%E5%8C%96.html"/>
<url>/2018/09/18/gitlab-ci-docker-compose(3)-mysql%E7%9A%84%E5%88%9D%E5%A7%8B%E5%8C%96.html</url>
<content type="html"><![CDATA[<h2 id="mysql-容器初始化:"><a href="#mysql-容器初始化:" class="headerlink" title="mysql 容器初始化:"></a>mysql 容器初始化:</h2><ul><li>根据官方文档可以使用docker启动时bind主机包含sql、sh文件的目录到容器/docker-entrypoint-initdb.d/目录,在使用mysql镜像启动容器时会自动读取该文件夹下的内容对数据库初始化。可以使用以下两种方式在命令行启动,在低版本中可能只支持-v选项,在docker 17.03.0-ce版本(不含)以上支持–mount选项。</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">--mount选项</span><br><span class="line">sudo docker run -it --name=mysql_db --mount type=bind,src=/home/zhang/Workspace/go/src/app/init_sql_script/,dst=/docker-entrypoint-initdb.d/ -e MYSQL_ROOT_PASSWORD=test -d mysql:5.7.22</span><br><span class="line"></span><br><span class="line">-v选项</span><br><span class="line">sudo docker run -it --name=mysql_db -v /home/zhang/Workspace/go/src/app/init_sql_script/:/docker-entrypoint-initdb.d/ -e MYSQL_ROOT_PASSWORD=test -d mysql:5.7.22</span><br></pre></td></tr></table></figure><ul><li>另外一种方法使用dockerfile的方式。直接在dockerfile中复制相应的sql,sh文件到/docker-entrypoint-initdb.d/目录下,再根据dockerfile build出对应的镜像,docker run的时候也会直接初始化对应的数据。</li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"># DockerfileMySQL</span><br><span class="line">FROM mysql:5.7.22</span><br><span class="line"></span><br><span class="line">COPY ./init_sql_script/ /docker-entrypoint-initdb.d/</span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">sudo docker build -t app_mysql_db:init-data . 
</span><br><span class="line">sudo docker run --name=app_mysql_db app_mysql_db:init-data</span><br></pre></td></tr></table></figure><h2 id="docker-compose启动容器组:"><a href="#docker-compose启动容器组:" class="headerlink" title="docker-compose启动容器组:"></a>docker-compose启动容器组:</h2><a id="more"></a><ul><li>可以使用docker-compose启动容器组,先写好 docker-compose.yml后,在build和run。</li></ul><figure class="highlight yml"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># docker-compose.yml</span></span><br><span class="line"><span class="attr">version:</span> <span class="string">'3'</span></span><br><span class="line"></span><br><span class="line"><span class="attr">services:</span></span><br><span class="line"><span class="attr"> app:</span></span><br><span class="line"><span class="attr"> build:</span></span><br><span class="line"><span class="attr"> context:</span> <span class="string">.</span></span><br><span class="line"><span class="attr"> dockerfile:</span> <span class="string">DockerfileAPP</span></span><br><span class="line"><span class="attr"> image:</span> <span 
class="attr">app:latest</span></span><br><span class="line"><span class="attr"> ports:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="string">"127.0.0.1:8080:8080"</span></span><br><span class="line"><span class="attr"> depends_on:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="string">mysql_db</span></span><br><span class="line"><span class="attr"> command:</span> <span class="string">app</span> <span class="string">start</span></span><br><span class="line"></span><br><span class="line"><span class="attr"> mysql_db:</span></span><br><span class="line"> <span class="comment">#build:</span></span><br><span class="line"> <span class="comment"># context: .</span></span><br><span class="line"> <span class="comment"># dockerfile: DockerfileMySQL</span></span><br><span class="line"> <span class="comment">#image: mysql_db:latest</span></span><br><span class="line"><span class="attr"> image:</span> <span class="string">"mysql:5.7.22"</span></span><br><span class="line"><span class="attr"> environment:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="string">MYSQL_ROOT_PASSWORD=test</span></span><br><span class="line"><span class="attr"> expose:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="string">"3306"</span></span><br><span class="line"><span class="attr"> volume:</span></span><br><span class="line"><span class="bullet"> -</span> <span class="string">./init_sql_script/:/docker-entrypoint-initdb.d/</span></span><br></pre></td></tr></table></figure><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">sudo docker-compose build </span><br><span class="line">sudo docker-compose up --force-recreate</span><br></pre></td></tr></table></figure><h2 id="gitlab-ci-启动容器组"><a href="#gitlab-ci-启动容器组" class="headerlink" 
title="gitlab ci 启动容器组:"></a>gitlab ci 启动容器组:</h2><ul><li>gitlab ci 对于使用bind(–monut or -v)方式初始化数据库有还未有很好的支持。在gitlab ci中使用docker-compose中启动容器组时,docker-compose.yml中使用bind模式时不能成功初始化数据。如果docker-compose.yml使用dockerfile的方式则能够初始化数据。</li></ul><p>参考:</p><p>mysql官方docker说明 <a href="https://hub.docker.com/_/mysql/" target="_blank" rel="noopener">https://hub.docker.com/_/mysql/</a></p><p>Introduce relative entrypoints and start services after project clone and checkout<br><a href="https://gitlab.com/gitlab-org/gitlab-runner/issues/3210" target="_blank" rel="noopener">https://gitlab.com/gitlab-org/gitlab-runner/issues/3210</a></p><p>MIGRATING TO GITLAB CI SERVICES<br><a href="https://www.mariocarrion.com/2017/10/16/gitlab-ci-services.html" target="_blank" rel="noopener">https://www.mariocarrion.com/2017/10/16/gitlab-ci-services.html</a></p><p>gitlab ci: mysql build and restore db dump<br><a href="https://stackoverflow.com/questions/44009941/gitlab-ci-mysql-build-and-restore-db-dump" target="_blank" rel="noopener">https://stackoverflow.com/questions/44009941/gitlab-ci-mysql-build-and-restore-db-dump</a></p>]]></content>
<tags>
<tag> docker </tag>
<tag> gitlabci </tag>
</tags>
</entry>
<entry>
<title>gitlab ci && docker-compose(2)-dockerfile的使用</title>
<link href="/2018/09/16/gitlab-ci-docker-compose(2)-dockerfile%E7%9A%84%E4%BD%BF%E7%94%A8.html"/>
<url>/2018/09/16/gitlab-ci-docker-compose(2)-dockerfile%E7%9A%84%E4%BD%BF%E7%94%A8.html</url>
<content type="html"><![CDATA[<h2 id="dockerfile"><a href="#dockerfile" class="headerlink" title="dockerfile"></a>dockerfile</h2><h3 id="COPY-VS-ADD"><a href="#COPY-VS-ADD" class="headerlink" title="COPY VS. ADD"></a>COPY VS. ADD</h3><ul><li>ADD支持从本地tar文件复制解压,tar文件内的根目录应该包含dockerfile,并且context也限定为tar内部,通常配合docker build 使用,如 docker build - < archive.tar.gz</li><li>ADD支持url获取,COPY不支持。当使用 docker build - < somefile 传入dockerfile时,没有context可用,只能使用ADD从一个URL获取context</li><li>ADD的最佳用途是将本地tar文件自动提取到image中,如 ADD rootfs.tar.xz / 。</li><li>它们都是based context,不能复制context之外的东西</li><li>区别:<a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#add-or-copy" target="_blank" rel="noopener">https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#add-or-copy</a></li></ul><h3 id="CMD-VS-ENTRYPOINT"><a href="#CMD-VS-ENTRYPOINT" class="headerlink" title="CMD VS. ENTRYPOINT"></a>CMD VS. ENTRYPOINT</h3><h4 id="CMD:"><a href="#CMD:" class="headerlink" title="CMD:"></a>CMD:</h4><ul><li>The CMD instruction should be used to run the software contained by your image, along with any arguments. Indeed, this form of the instruction is recommended for any service-based image. 
like:<ul><li>CMD [“executable”, “param1”, “param2”…]</li><li>CMD [“apache2”,”-DFOREGROUND”]</li></ul></li><li>CMD命令还用在交互式的shell中,如bash,python,perl。例如以下几个用法。 使用这种方式的好处是你能够执行如 docker run -it python 就能直接进入到一个有用的shell中。<ul><li>CMD [“perl”, “-de0”], CMD [“python”], or CMD [“php”, “-a”]</li></ul></li><li>CMD命令很少以 CMD [“param”, “param”] 的出现,这种用法是为了配合 ENTRYPOINT 使用的,所以不要轻易使用这种方式,除非你和你的目标用户清楚ENTRYPOINT是如何工作的</li></ul><h4 id="ENTRYPOINT"><a href="#ENTRYPOINT" class="headerlink" title="ENTRYPOINT:"></a>ENTRYPOINT:</h4><ul><li>ENTRYPOINT的最佳实践是,设置一个主要命令,使得运行image就像运行一个命令一样</li><li>假如有一个image定义了s3cmd命令,ENRTRYPOINT和CMD如下<ul><li>ENTRYPOINT [“s3cmd”]</li><li>CMD [“–help”]</li><li>那么如果直接运行 docker run s3cmd 的话,将会显示 s3cmd 的帮助提示</li><li>或者提供一个参数再执行命令,docker run s3cmd ls s3://mybucket,将会覆盖CMD的–help参数</li></ul></li><li>当然ENTRYPOINT也可以是一个sh脚本,可以自定义解析docker run 的时候传入的命令参数</li><li>配置容器启动后执行的命令,并且不可被 docker run 提供的参数覆盖</li></ul><a id="more"></a><h4 id="CMD和ENTRYPOINT联合使用:"><a href="#CMD和ENTRYPOINT联合使用:" class="headerlink" title="CMD和ENTRYPOINT联合使用:"></a>CMD和ENTRYPOINT联合使用:</h4><ul><li>ENTRYPOINT为exec模式时,才能够指定CMD参数和docker run时的参数;</li><li>ENTRYPOINT为shell模式时,CMD参数和docker run的参数都将失效。</li><li>ENTRYPOINT提供默认运行的命令,也可以包含默认的参数</li><li>dockerfile中的CMD提供的默认参数,并且如果在docker run 的时候传入了对应的命令(这时命令应该被理解为参数),则会覆盖CMD的默认参数,添加到ENTRYPOINT后面 </li></ul><h4 id="小结"><a href="#小结" class="headerlink" title="小结"></a>小结</h4><p>CMD</p><ul><li>一般使用exec模式</li><li>使用shell模式时,将会默认在命令亲前面加上 /bin/sh -c,如 /bin/bash -c “echo $HOME”</li><li>如果要解析主机的环境变量,则要在docker run的时候替换dockerfile中默认的命令</li><li>如果要解析container的环境变量,则要在dockerfile中使用CMD的shell模式,如executable param1 param2,或者在正常exec模式加上”sh”,”-c”,然后具体命令参数只添加到list中的一个参数中,如[“sh”, “-c”, “executable param1 param2”],</li></ul><p>ENTRYPOINTV</p><ul><li>一般使用exec模式<ul><li>只有exec模式才能与CMD联合使用</li><li>shell模式将会忽略dockerfile中的CMD和docker run时指定的参数</li></ul></li><li>如果要使用shell模式,请使用exec来启动命令,如果不加exec将会默认在命令亲前面加上 /bin/sh -c。要确保docker stop能给任何长时间运行的ENTRYPOINT可执行文件正确发出信号,需要记住用 <em>exec</em> 启动它,如 
ENTRYPOINT exec executable param1 param2。</li><li>如果使用–entrypoint覆盖默认的ENTRYPOINT,则–entrypoint也必须为exec模式(将不会在命令前添加sh -c),需要注意的是entrypoint只能为一个命令或者shell脚本,不能包含任何参数,参数应该为docker run指定的参数。使用–entrypoint覆盖默认ENTRYPOINT的同时,dockerfile中的CMD也失效,CMD不再作为默认的参数</li></ul><h3 id="context"><a href="#context" class="headerlink" title="context:"></a>context:</h3><ul><li>在执行docker build PATH的时候,PATH即为context,并且PATH目录默认应该有dockerfile文件</li><li>当然也可以使用 -f 指定dockerfile,但是PATH依然是context的唯一依据</li></ul><h3 id="EXPOSE"><a href="#EXPOSE" class="headerlink" title="EXPOSE:"></a>EXPOSE:</h3><ul><li>EXPOSE指令通知Docker容器在运行时侦听指定的网络端口</li></ul><h3 id="参考"><a href="#参考" class="headerlink" title="参考:"></a>参考:</h3><p>dockfile最佳实践:<a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#cmd" target="_blank" rel="noopener">https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#cmd</a></p><p>ci的默认变量:<a href="https://docs.gitlab.com/ce/ci/variables/README.html" target="_blank" rel="noopener">https://docs.gitlab.com/ce/ci/variables/README.html</a></p><p>dockerfile引用参考:<a href="https://docs.docker.com/engine/reference/builder/" target="_blank" rel="noopener">https://docs.docker.com/engine/reference/builder/</a></p><p>ENTRYPOINT和CMD的联合使用:<a href="https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact" target="_blank" rel="noopener">https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact</a></p><p>sudo or gosu:<a href="https://segmentfault.com/a/1190000004527476" target="_blank" rel="noopener">https://segmentfault.com/a/1190000004527476</a></p><p>What does set -e and exec “$@” do for docker entrypoint scripts?:<a href="https://stackoverflow.com/questions/39082768/what-does-set-e-and-exec-do-for-docker-entrypoint-scripts" target="_blank" rel="noopener">https://stackoverflow.com/questions/39082768/what-does-set-e-and-exec-do-for-docker-entrypoint-scripts</a></p><p>bash 内置命令exec (重要!!):<a 
href="https://www.cnblogs.com/gandefeng/p/7106742.html" target="_blank" rel="noopener">https://www.cnblogs.com/gandefeng/p/7106742.html</a></p><p>docker run –entrypoint如何添加参数:<a href="https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime" target="_blank" rel="noopener">https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime</a></p>]]></content>
<tags>
<tag> docker </tag>
<tag> gitlabci </tag>
</tags>
</entry>
<entry>
<title>effective go learning</title>
<link href="/2018/09/16/effective-go-learning-1.html"/>
<url>/2018/09/16/effective-go-learning-1.html</url>
<content type="html"><![CDATA[<h2 id="Formmating:"><a href="#Formmating:" class="headerlink" title="Formmating:"></a>Formmating:</h2><ul><li>gofmt:针对文件进行格式化</li><li>go fmt:针对包进行格式化</li></ul><h3 id="Indentation"><a href="#Indentation" class="headerlink" title="Indentation"></a>Indentation</h3><p>We use tabs for indentation(<strong>使用tab来缩进</strong>) and gofmt emits them by default. Use spaces only if you must.(<strong>仅在必要时使用空格</strong>)</p><h3 id="Line-length"><a href="#Line-length" class="headerlink" title="Line length"></a>Line length</h3><p>Go has no line length limit. Don’t worry about overflowing a punched card. If a line feels too long, wrap it and indent with an extra tab.</p><h3 id="Parentheses"><a href="#Parentheses" class="headerlink" title="Parentheses"></a>Parentheses</h3><p>Go needs fewer parentheses(<strong>更少的括号</strong>) than C and Java: control structures (if, for, switch) do not have parentheses in their syntax. Also, the operator precedence hierarchy is shorter and clearer</p><p>控制结构(if,for,switch)的语法中没有括号,使用严格的空格来提升直观感。</p><h2 id="Commentary:"><a href="#Commentary:" class="headerlink" title="Commentary:"></a>Commentary:</h2><ul><li>提供C模式的 /<em> </em>/的块注释,和C++模式的行注释,行注释更加普遍,而块模式在包注释以及大块注释的时候比较常用</li><li>godoc会抽取注释成文档</li><li><strong>在顶级声明之前出现的注释</strong>(没有中间换行符)将与声明一起提取,以作为项目的解释性文本。</li><li>每个包都应该有一个包注释,在package子句之前有一个块注释。对于多个文件的package,只需要在任意一个文件中声名包注释即可。包注释应该介绍包,并提供与整个包相关的信息。它将首先出现在godoc页面上。</li><li><strong>程序中的每个导出(大写)名称都应具有doc注释。并且最好以被声明的函数、字段或者其他作为开头</strong>。这样子用godoc时,容易搜索到对应的文档</li></ul><a id="more"></a><h2 id="Names:"><a href="#Names:" class="headerlink" title="Names:"></a>Names:</h2><h3 id="package-name:"><a href="#package-name:" class="headerlink" title="package name:"></a>package name:</h3><ul><li>按照惯例,软件包被赋予 <strong>小写单字</strong>名称; 应该不需要下划线或者混合使用。如果包名冲突的话,可以使用别名引用</li><li>包名是其 <strong>源目录的基本名称</strong>,包中src/encoding/base64 
输入”encoding/base64”,但是有名称base64,不是encoding_base64也不是encodingBase64</li><li>使用package.New的形式定义实例函数</li></ul><h3 id="Getters:"><a href="#Getters:" class="headerlink" title="Getters:"></a>Getters:</h3><ul><li>使用与字段相同的方法(首字母大写)来命名getter</li><li>使用Set+与字段相同的方法(首字母大写)来命名setter</li></ul><h3 id="Interface-names:"><a href="#Interface-names:" class="headerlink" title="Interface names:"></a>Interface names:</h3><ul><li>一个方法接口由该方法name加上 <strong>er后缀</strong>或类似的修改命名:Reader, Writer,Formatter, CloseNotifier等。</li><li>Read,Write,Close,Flush, String等有规范签名和意义。为避免混淆,请不要将您的方法作为其中一个名称,除非它具有相同的签名和含义。</li><li>相反,如果您的类型实现的方法与众所周知类型的方法具有相同的含义,请为其指定相同的名称和签名;</li><li>使用String调用你的字符串转换方法而不是ToString,即使用String而不是ToString</li></ul><h3 id="MixedCaps:"><a href="#MixedCaps:" class="headerlink" title="MixedCaps:"></a>MixedCaps:</h3><ul><li>使用首字母大写的驼峰或者首字母小写的驼峰对多字名称进行命名</li><li><strong>而不是使用下划线</strong></li></ul><h2 id="Semicolons:"><a href="#Semicolons:" class="headerlink" title="Semicolons:"></a>Semicolons:</h2><ul><li>与C语言一样使用分号对语句进行分割,不同的是大部分工作由词法分析器完成</li><li>如果一行以 int、float64、数字或者字符串常量,或者为以下其中之一,则词法分析器会在此句末尾添加分号<ul><li>break continue fallthrough return ++ – ) }</li></ul></li><li>对于一个闭包来说,分号也可以省略</li><li><strong>在go中,一般只有for循环子句之类具有分号</strong></li><li>不能把控制结构的左括号(if,for,switch,或select)在下一行(因为放在下一行,词法分析器会自动加分号到行尾),如错误示范<ul><li>if i < f() // wrong!{ // wrong!g()}</li></ul></li></ul><h2 id="Control-structures:"><a href="#Control-structures:" class="headerlink" title="Control structures:"></a>Control structures:</h2><p>与C语言类似,不过没有do和while,只有for、if、switch、selectif:</p><ul><li><strong>if语句必须包含大括号</strong>,无论子句有多简单</li><li>由于if和switch接受初始化语句,它经常可以看到一个用于设置一个局部变量用法</li><li>由于在go中倾向于使用return来返回错误,所以在这种流程中不需要else子句</li></ul><h3 id="Redeclaration-and-reassignment:"><a href="#Redeclaration-and-reassignment:" class="headerlink" title="Redeclaration and reassignment:"></a>Redeclaration and 
reassignment:</h3><ul><li><p>再次声明和再次赋值需要注意三个方面</p><ul><li>再次声明是在相同的域下发生的(<strong>如果v已在外部声明中声明,这时声明将创建一个新变量§</strong>)(注释:在go中变量的作用域与传统语言比较类似,比如if语句中的变量为局部变量,而在python中,if语句中的变量不是局部变量,而是与if外部共享作用域和命名空间)</li><li><strong>初始化中的相应值可分配给v</strong></li><li>声明中至少有一个变量被新建,如果都已经被声明过,则应该使用 “=”,而不是”:=”,因为 “:=” 的第一步是重新创建变量,如果变量均已经存在,则不需要重新创建新的变量,所以不能使用 “:=”</li></ul></li></ul><h3 id="For:"><a href="#For:" class="headerlink" title="For:"></a>For:</h3><ul><li>没有while和do-while</li><li><p>三种形式</p><ul><li>for init; condition; post { } // Like a C for</li><li>for condition { } // Like a C while</li><li>for { } // Like a C for(;;) or while(true)</li></ul></li><li><strong>结合range对数组,切片,字符串,map,channel进行遍历</strong><ul><li>range能够比较好的处理,utf-8类型的字符串,能够自动解码,同时应该使用 “%q” 占位符进行输出</li></ul></li><li>go中没有逗号运算</li><li>++ 和 – 是一个声明,而不是一个表达式</li><li>注意for的post中只允许一个表达式,所以如果想要在for中使用多个变量,你应该使用 <strong>并行</strong>赋值,如<ul><li>for i, j := 0, len(a)-1; i < j; i, j = i+1, j-1 {}</li><li><del>所以也不能在post段使用 ++ 或者 –,因为post中需要一个表达式,但是这两个是声明。</del> 应该说如果在post需要给两个变量赋值时,不能使用 ++ 或者 –</li></ul></li></ul><h3 id="Switch:"><a href="#Switch:" class="headerlink" title="Switch:"></a>Switch:</h3><ul><li><p>case后面可以跟多个条件,用逗号分隔即可,如</p><ul><li>case ‘ ‘, ‘?’, ‘&’, ‘=’, ‘#’, ‘+’, ‘%’:</li></ul></li><li>与c不同的是,<strong>go的switch中的case子句并不会因为没有break就一直执行</strong>,而是只执行一个case</li><li>并且switch中的break是为了提前结束case后的代码,从而跳转到switch语句块后</li><li>如果switch外有for循环,则可以在for外增加label,break + label则在case中可以直接跳转到循环外。当然continue也可以使用label,但是只对loop有用</li></ul><h3 id="Type-switch:"><a href="#Type-switch:" class="headerlink" title="Type switch:"></a>Type switch:</h3><ul><li>switch 也可以用来发现一个<strong>接口变量的动态类型</strong>。需要配合 type 关键字使用</li><li>实际上声明了一个具有相同名称但在每种情况下具有不同类型的新变量,如</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span 
class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">var</span> t <span class="keyword">interface</span>{}; t = functionOfSomeType()</span><br><span class="line"><span class="keyword">switch</span> t := t.(<span class="keyword">type</span>) { </span><br><span class="line"> <span class="keyword">default</span>: fmt.Printf(<span class="string">"unexpected type %T\n"</span>, t) <span class="comment">// %T prints whatever type t has </span></span><br><span class="line"> <span class="keyword">case</span> <span class="keyword">bool</span>: fmt.Printf(<span class="string">"boolean %t\n"</span>, t) <span class="comment">// t has type bool </span></span><br><span class="line"> <span class="keyword">case</span> <span class="keyword">int</span>: fmt.Printf(<span class="string">"integer %d\n"</span>, t) <span class="comment">// t has type int </span></span><br><span class="line"> <span class="keyword">case</span> *<span class="keyword">bool</span>: fmt.Printf(<span class="string">"pointer to boolean %t\n"</span>, *t) <span class="comment">// t has type *bool </span></span><br><span class="line"> <span class="keyword">case</span> *<span class="keyword">int</span>: fmt.Printf(<span class="string">"pointer to integer %d\n"</span>, *t) <span class="comment">// t has type *int</span></span><br><span class="line">}</span><br></pre></td></tr></table></figure><ul><li>类型查询,就是根据变量,查询这个变量的类型。为什么会有这样的需求呢?golang中有一个特殊的类型interface{},这个类型可以被任何类型的变量赋值,如果想要知道到底是哪个类型的变量赋值给了interface{}类型变量,就需要使用类型查询来解决这个需求</li></ul><h2 id="Functions:"><a href="#Functions:" class="headerlink" title="Functions:"></a>Functions:</h2><h3 id="Multiple-return-values:"><a href="#Multiple-return-values:" class="headerlink" title="Multiple return values:"></a>Multiple return values:</h3><ul><li>可以返回多个值,避免了类似C中必须传指针到函数中才能修改数值的方式</li></ul><h3 id="Named-result-parameters:"><a href="#Named-result-parameters:" class="headerlink" title="Named result 
parameters:"></a>Named result parameters:</h3><ul><li>定义函数的返回值类型的时候,也可以指定变量名</li><li>指定了变量名后,当函数开始时,将会根据它们的类型进行零值初始化,这时函数可以直接return不用添加任何值,会默认返回已经声明的返回变量</li></ul><h3 id="Defer:"><a href="#Defer:" class="headerlink" title="Defer:"></a>Defer:</h3><ul><li>在解锁互斥锁和关闭文件中最常用</li><li>在函数返回前,立刻调用被声明为defer的函数</li><li>如果被defer的函数有参数,那么 <strong>参数值为defer语句执行时参数的值</strong>,不会因为参数在后面的流程中被更改而导致defer函数的参数被修改</li><li>如果有多个defer时,遵循“LIFO”后进先出的原则,入栈出栈</li></ul><h2 id="Data:"><a href="#Data:" class="headerlink" title="Data:"></a>Data:</h2><h3 id="Allocation-with-new:"><a href="#Allocation-with-new:" class="headerlink" title="Allocation with new:"></a>Allocation with new:</h3><ul><li>go有两个分配语句,new和make</li><li>new<ul><li>内建函数</li><li>分配内存,但是不像其他语言一样初始化内存,而是仅仅用零值填充它</li><li>new(T)在内存分配了一个零化的T,并返回T的指针 *T </li></ul></li><li>由于返回的内存new为零,因此需要零值的数据结构情况下,<strong>可以直接使用不用进一步的初始化</strong></li><li>零值有传递性</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line"></span><br><span class="line"><span class="keyword">type</span> SyncedBuffer <span class="keyword">struct</span> { </span><br><span class="line"> lock sync.Mutex </span><br><span class="line"> buffer bytes.Buffer</span><br><span class="line"> }</span><br><span class="line"> p := <span class="built_in">new</span>(SyncedBuffer) <span class="comment">// type *SyncedBuffer</span></span><br><span class="line"> <span class="keyword">var</span> v SyncedBuffer <span class="comment">// type SyncedBuffer</span></span><br></pre></td></tr></table></figure><h3 id="Constructors-and-composite-literals"><a href="#Constructors-and-composite-literals" class="headerlink" title="Constructors and composite literals:"></a>Constructors and composite 
literals:</h3><ul><li>如果不需要初始化结构体的内部字段,则 new(File) 等价于 &File{}</li><li>可以指定初始化File的某些字段,如 &File{fd: fd, name: name}</li></ul><h3 id="Allocation-with-make"><a href="#Allocation-with-make" class="headerlink" title="Allocation with make:"></a>Allocation with make:</h3><ul><li>与new相比,内置函数make(T, args)的用途与new(T)不同。它仅创建slice,maps和channels,并返回一个已初始化(而非置零)的T类型的值(不是*T)。</li><li>有这样的区别的原因是,这三种类型的数据结构必须初始化了才能使用,比如slice必须初始化指向数组的指针、长度、容量,map和channel也是如此。</li><li>比如,make([]int,10,100),分配一个100个整数的数组,然后创建一个长度为10且容量为100的切片结构,指向数组的前10个元素。<strong>注意底层是申请cap大小的数组</strong>,而 <strong>c := new([]int) 则返回一个指向新分配的slice数据结构的指针</strong>,虽然len和cap会初始化为0,<strong>但是这个slice内部的数组指针为nil,*c == nil 为true。</strong></li><li>以下例子解释了new和make的区别。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">var</span> p *[]<span class="keyword">int</span> = <span class="built_in">new</span>([]<span class="keyword">int</span>) <span class="comment">// allocates slice structure; *p == nil; rarely useful</span></span><br><span class="line"><span class="keyword">var</span> v []<span class="keyword">int</span> = <span class="built_in">make</span>([]<span class="keyword">int</span>, <span class="number">100</span>) <span class="comment">// the slice v now refers to a new array of 100 ints</span></span><br><span class="line"></span><br><span class="line"><span class="comment">// Unnecessarily complex:</span></span><br><span class="line"><span class="keyword">var</span> p *[]<span class="keyword">int</span> = <span class="built_in">new</span>([]<span class="keyword">int</span>)</span><br><span class="line">*p = <span class="built_in">make</span>([]<span 
class="keyword">int</span>, <span class="number">100</span>, <span class="number">100</span>)</span><br><span class="line"></span><br><span class="line"><span class="comment">// Idiomatic:</span></span><br><span class="line">v := <span class="built_in">make</span>([]<span class="keyword">int</span>, <span class="number">100</span>)</span><br></pre></td></tr></table></figure><h3 id="Arrays"><a href="#Arrays" class="headerlink" title="Arrays:"></a>Arrays:</h3><ul><li>数组在Go和C中的工作方式有很大差异。在Go中,<ul><li><strong>数组是值,不再是指针</strong>。将一个数组赋值给另一个数组会 <strong>复制</strong>所有元素。</li><li>特别是,如果将数组传递给函数,它将接收数组的副本,而不是指向它的指针。</li><li><strong>数组的大小是其类型</strong>的一部分。<strong>类型[10]int 和[20]int不同</strong></li></ul></li><li>当然可以用 & 显式地取数组的地址,如</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">Sum</span><span class="params">(a *[3]<span class="keyword">float64</span>)</span> <span class="params">(sum <span class="keyword">float64</span>)</span></span> { </span><br><span class="line"> <span class="keyword">for</span> _, v := <span class="keyword">range</span> *a { </span><br><span class="line"> sum += v </span><br><span class="line"> } </span><br><span class="line"> <span class="keyword">return</span></span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line">array := [...]<span class="keyword">float64</span>{<span class="number">7.0</span>, <span class="number">8.5</span>, <span class="number">9.1</span>}</span><br><span class="line">x := Sum(&array) <span class="comment">// Note the explicit address-of 
operator</span></span><br></pre></td></tr></table></figure><h3 id="Slice"><a href="#Slice" class="headerlink" title="Slice:"></a>Slice:</h3><ul><li>在go中,slice的使用远比array广泛。slice 持有对底层数组的引用,当一个切片赋值给另一个时,将会持有相同数组的引用。</li><li>如果函数采用slice参数,则对切片元素所做的更改将对调用者可见,类似于将指针传递给底层数组。(此处存疑)</li><li>对于slice,经常需要配合Append使用,其实现如下。之后我们必须返回切片,因为虽然Append可以修改切片的元素,但切片本身(运行时的数据结构包含指针,长度和容量)是按值传递的。</li></ul><figure class="highlight go"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">func</span> <span class="title">Append</span><span class="params">(slice, data []<span class="keyword">byte</span>)</span> []<span class="title">byte</span></span> {</span><br><span class="line"> l := <span class="built_in">len</span>(slice)</span><br><span class="line"> <span class="keyword">if</span> l + <span class="built_in">len</span>(data) > <span class="built_in">cap</span>(slice) { <span class="comment">// reallocate</span></span><br><span class="line"> <span class="comment">// Allocate double what's needed, for future growth.</span></span><br><span class="line"> newSlice := <span class="built_in">make</span>([]<span class="keyword">byte</span>, (l+<span class="built_in">len</span>(data))*<span class="number">2</span>)</span><br><span class="line"> <span class="comment">// The copy function is predeclared and works for any slice type.</span></span><br><span class="line"> <span class="built_in">copy</span>(newSlice, slice)</span><br><span class="line"> slice = newSlice</span><br><span class="line"> }</span><br><span 
class="line"> slice = slice[<span class="number">0</span>:l+<span class="built_in">len</span>(data)]</span><br><span class="line"> <span class="built_in">copy</span>(slice[l:], data)</span><br><span class="line"> <span class="keyword">return</span> slice</span><br><span class="line">}</span><br></pre></td></tr></table></figure>]]></content>
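上面的Append实现按值接收切片头(指针、长度、容量),因此调用者必须使用返回值。下面给出一个可独立运行的用法草图(示意性质,沿用上文的Append实现;初始容量4只是演示用的假设):

```go
package main

import "fmt"

// Append 与上文实现相同:容量不足时重新分配底层数组,
// 因此调用者必须接收返回值,而不能只依赖原来的切片头。
func Append(slice, data []byte) []byte {
	l := len(slice)
	if l+len(data) > cap(slice) { // reallocate
		// 按需求的两倍分配空间,便于后续增长
		newSlice := make([]byte, (l+len(data))*2)
		copy(newSlice, slice)
		slice = newSlice
	}
	slice = slice[0 : l+len(data)]
	copy(slice[l:], data)
	return slice
}

func main() {
	s := make([]byte, 0, 4) // cap为4,追加5个字节将触发重新分配
	s = Append(s, []byte("hello"))
	fmt.Println(string(s), len(s), cap(s)) // hello 5 10
}
```

追加5个字节超过了初始容量4,于是重新分配,新容量为(0+5)*2=10。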
<tags>
<tag> go </tag>
</tags>
</entry>
<entry>
<title>android获取ip(webtrc-app)</title>
<link href="/2018/09/16/android%E8%8E%B7%E5%8F%96ip(webtrc-app).html"/>
<url>/2018/09/16/android%E8%8E%B7%E5%8F%96ip(webtrc-app).html</url>
<content type="html"><![CDATA[<h1 id="Show-your-ip-by-webrtc"><a href="#Show-your-ip-by-webrtc" class="headerlink" title="Show your ip by webrtc"></a>Show your ip by webrtc</h1><p>代码: <a href="https://github.com/salmon7/ShowYourIP" target="_blank" rel="noopener">链接</a></p><p>实际演示效果: <a href="https://salmon7.github.io/ShowYourIP/" target="_blank" rel="noopener">demo链接</a></p><p>参考: <a href="https://github.com/diafygi/webrtc-ips" target="_blank" rel="noopener">https://github.com/diafygi/webrtc-ips</a></p><h1 id="Show-your-ip-in-app"><a href="#Show-your-ip-in-app" class="headerlink" title="Show your ip in app"></a>Show your ip in app</h1><p>使用java对应的接口查询目前的ip,包括wifi的ip和移动数据网络的ip,不一定每次能够查到所有的ip,与系统是否开启wifi、是否开启移动网络等相关。</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">public</span> <span class="keyword">static</span> List<String> <span class="title">getIPAddress</span><span class="params">()</span> </span>{</span><br><span class="line"></span><br><span class="line"> ArrayList<String> iplist = <span class="keyword">new</span> ArrayList<String>();</span><br><span class="line"> <span class="keyword">try</span> {</span><br><span class="line"> <span 
class="comment">//Enumeration<NetworkInterface> en=NetworkInterface.getNetworkInterfaces();</span></span><br><span class="line"> <span class="keyword">for</span> (Enumeration<NetworkInterface> en = NetworkInterface.getNetworkInterfaces(); en.hasMoreElements(); ) {</span><br><span class="line"> NetworkInterface intf = en.nextElement();</span><br><span class="line"> <span class="keyword">for</span> (Enumeration<InetAddress> enumIpAddr = intf.getInetAddresses(); enumIpAddr.hasMoreElements(); ) {</span><br><span class="line"> InetAddress inetAddress = enumIpAddr.nextElement();</span><br><span class="line"> <span class="keyword">if</span> (!inetAddress.isLoopbackAddress() && inetAddress <span class="keyword">instanceof</span> Inet4Address) {</span><br><span class="line"> Log.d(MainActivity.class.getName(), <span class="string">"ip is: "</span> + inetAddress.getHostAddress());</span><br><span class="line"> iplist.add(inetAddress.getHostAddress());</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> } <span class="keyword">catch</span> (SocketException e) {</span><br><span class="line"> e.printStackTrace();</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> iplist;</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>参考: <a href="https://www.cnblogs.com/anni-qianqian/p/8084656.html" target="_blank" rel="noopener">https://www.cnblogs.com/anni-qianqian/p/8084656.html</a></p>]]></content>
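上述Java枚举网卡地址的思路,也可以用Go标准库的net包草拟出来作为对照(示意性质,并非本文App的代码;函数名localIPv4s是假设的命名)。与Java版一样,能枚举到哪些地址取决于设备当前启用的网络接口:

```go
package main

import (
	"fmt"
	"net"
)

// localIPv4s 收集所有非回环的IPv4地址,
// 对应上文Java版getIPAddress的过滤逻辑。
func localIPv4s() ([]string, error) {
	var ips []string
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		addrs, err := iface.Addrs()
		if err != nil {
			continue // 跳过无法读取地址的接口
		}
		for _, addr := range addrs {
			var ip net.IP
			switch v := addr.(type) {
			case *net.IPNet:
				ip = v.IP
			case *net.IPAddr:
				ip = v.IP
			}
			// 过滤回环地址,且只保留IPv4
			if ip != nil && !ip.IsLoopback() && ip.To4() != nil {
				ips = append(ips, ip.String())
			}
		}
	}
	return ips, nil
}

func main() {
	ips, err := localIPv4s()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, ip := range ips {
		fmt.Println(ip)
	}
}
```

与Java版的getIPAddress一样,这里排除了回环地址并只保留IPv4;具体输出依赖运行环境(是否开启wifi、移动网络等)。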
<tags>
<tag> android </tag>
<tag> ip </tag>
</tags>
</entry>
</search>