[MySQL 5.6] A simple optimizer_trace example

The previous post covered how to enable and configure optimizer_trace in MySQL 5.6 (see that post). Here we use a relatively simple example to walk through how the optimizer_trace output is produced.

The goal of this article is not to dig deep into the query optimizer's implementation, but only to trace which part of the optimizer emits each piece of the trace output, so many topics are only touched on briefly; the parts that deserve deeper study are flagged for follow-up reading. (I had never read this code before, so it is still fairly hard to follow…)

We use a simple table to walk through how the optimizer trace is produced:
mysql> show create table sbtest1\G
*************************** 1. row ***************************
       Table: sbtest1
Create Table: CREATE TABLE `sbtest1` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `k` int(10) unsigned NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  `d` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `k` (`k`,`d`)
) ENGINE=InnoDB AUTO_INCREMENT=1000001 DEFAULT CHARSET=gbk MAX_ROWS=1000000
1 row in set (0.00 sec)

And a simple SQL statement:
mysql> explain select * from sbtest1  where k >= 160000 and k < 320000  order by pad limit 3;
+----+-------------+---------+-------+---------------+------+---------+------+------+---------------------------------------+
| id | select_type | table   | type  | possible_keys | key  | key_len | ref  | rows | Extra                                 |
+----+-------------+---------+-------+---------------+------+---------+------+------+---------------------------------------+
|  1 | SIMPLE      | sbtest1 | range | k             | k    | 4       | NULL | 3660 | Using index condition; Using filesort |
+----+-------------+---------+-------+---------------+------+---------+------+------+---------------------------------------+
1 row in set (0.00 sec)
Background

After parsing completes, MySQL has not yet examined any of the logic in the SQL; the syntax tree is simply stored. Before getting to the main topic, some preparation: we go straight into mysql_execute_command->execute_sqlcom_select->handle_select

The SQL executed here contains no UNION (select_lex->master_unit()->is_union() is FALSE), so mysql_select is called rather than mysql_union:

    SELECT_LEX_UNIT *unit= &lex->unit;

    unit->set_limit(unit->global_parameters);
    /*
      ‘options’ of mysql_select will be set in JOIN, as far as JOIN for
      every PS/SP execution new, we will not need reset this flag if
      setup_tables_done_option changed for next rexecution
    */
    res= mysql_select(thd,
                      select_lex->table_list.first,
                      select_lex->with_wild, select_lex->item_list,
                      select_lex->where,
                      &select_lex->order_list,
                      &select_lex->group_list,
                      select_lex->having,
                      select_lex->options | thd->variables.option_bits |
                      setup_tables_done_option,

                      result, unit, select_lex);

##############################

Initialize the LIMIT

The list of tables being queried

Whether "*" is present; item_list is the list of selected items

The WHERE condition

The ORDER BY item list

The GROUP BY item list

The HAVING clause

Preparation phase

Entry: mysql_select
                    ->mysql_prepare_select
                              -> JOIN::prepare

The trace object is actually stored in the thd, so each thread can only see its own optimizer trace:
117   Opt_trace_context * const trace= &thd->opt_trace;
118   Opt_trace_object trace_wrapper(trace);
119   Opt_trace_object trace_prepare(trace, "join_preparation");
120   trace_prepare.add_select_number(select_lex->select_number);
121   Opt_trace_array trace_steps(trace, "steps");

The preparation phase mainly does the following (taken from the source documentation):

  • Initialization and linking JOIN structure to st_select_lex.

  • fix_fields() for all items (after fix_fields(), we know everything about item).

  • Moving HAVING to WHERE if possible.

  • Initialization procedure if there is one. 

The detailed flow of this phase is fairly involved and is not expanded here.

The optimizer_trace output for the preparation phase is:

      "join_preparation": {
        "select#": 1,
        "steps": [
          {
            "expanded_query": "/* select#1 */ select `sbtest1`.`id` AS `id`,`sbtest1`.`k` AS `k`,`sbtest1`.`c` AS `c`,`sbtest1`.`pad` AS `pad`,`sbtest1`.`d` AS `d` from `sbtest1` where ((`sbtest1`.`k` >= 160000) and (`sbtest1`.`k` < 320000)) order by `sbtest1`.`pad` limit 3"
          }
        ]
      }

#################################################

quoted code in JOIN::prepare

214   {
215     Opt_trace_object trace_wrapper(trace);
216     opt_trace_print_expanded_query(thd, select_lex, &trace_wrapper);
217   }

Here * has been expanded into the concrete columns.

"select#": 1 identifies the first SELECT in the statement; every SELECT gets a number. If there are several, each appears with its own expanded query.

Optimization phase

Entry: mysql_select->mysql_execute_select->JOIN::optimize

This is the core of query optimization and the part we need to focus on.

a. Step one: optimize the query conditions

From the printed output it is clearly the optimization of the WHERE condition, done in optimize_cond. Before that call, however, some other work has to happen first:

a1. Process subqueries in the WHERE condition

As we know, 5.6 already optimizes subqueries like select … where id in (select …) by converting them into semi-join operations; this happens in flatten_subqueries(). flatten_subqueries() is effectively recursive, because a subquery may itself contain subqueries.

Once the recursion over the subqueries finishes, every subquery nested inside a subquery has already been converted (to a semi-join or to EXISTS); then the current subquery is converted. If the number of joined tables does not exceed MAX_TABLES ((sizeof(table_map)*8 - 3) = 61), it can normally be converted to a semi-join; beyond that limit it can only be converted to EXISTS.

This raises the question: which subqueries should be converted to semi-joins first? In this step each subquery is assigned a priority, set near the beginning of flatten_subqueries():
6897     (*subq)->sj_convert_priority=
6898       (((*subq)->unit->uncacheable & UNCACHEABLE_DEPENDENT) ? MAX_TABLES : 0) +
6899       child_join->tables;

 

Before the conversion, the elements of sj_subselects are sorted by sj_convert_priority; higher-priority subqueries are converted first.
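As a sketch, the priority computation above (with MAX_TABLES = sizeof(table_map)*8 - 3 = 61 for a 64-bit table_map) behaves like this; the helper and the sample subqueries are illustrative, not MySQL code:

```python
# Sketch of sj_convert_priority: dependent (correlated) subqueries get a
# MAX_TABLES boost so they are converted to semi-joins first; ties are
# broken by the number of tables in the child join.
MAX_TABLES = 8 * 8 - 3  # sizeof(table_map) * 8 - 3 = 61 for a 64-bit bitmap

def sj_convert_priority(uncacheable_dependent, child_join_tables):
    return (MAX_TABLES if uncacheable_dependent else 0) + child_join_tables

# Hypothetical subqueries, sorted the way sj_subselects is sorted:
subqueries = [
    {"name": "subq1", "dependent": False, "tables": 2},   # priority 2
    {"name": "subq2", "dependent": True,  "tables": 1},   # priority 62
]
order = sorted(subqueries,
               key=lambda s: sj_convert_priority(s["dependent"], s["tables"]),
               reverse=True)  # highest priority converts first: subq2, then subq1
```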

How the conversion itself works requires a careful read of replace_subcondition and convert_subquery_to_semijoin later.

For example, construct a SQL statement containing a subquery:

select * from sbtest1 where id in (select id from sbtest1 where id = 100); 

Part of the optimizer_trace output is:
      "join_optimization": {
        "select#": 1,
        "steps": [
          {
            "transformation": {
              "select#": 2,
              "from": "IN (SELECT)",
              "to": "semijoin",
              "chosen": true,
              "evaluating_constant_semijoin_conditions": [
              ]
            }
          },
          {
            "transformations_to_nested_joins": {
              "transformations": [
                "semijoin"
              ],
              "expanded_query": "/* select#1 */ select `sbtest1`.`id` AS `id`,`sbtest1`.`k` AS `k`,`sbtest1`.`c` AS `c`,`sbtest1`.`pad` AS `pad`,`sbtest1`.`d` AS `d` from `sbtest1` semi join (`sbtest1`) where (1 and (`sbtest1`.`id` = 100) and (`sbtest1`.`id` = `sbtest1`.`id`))"
            }
          },
This part of the trace is quite readable: the IN (SELECT) is converted into a semijoin, and the query after the conversion is printed.

a2) Optimize derived tables

if (select_lex->handle_derived(thd->lex, &mysql_derived_optimize))

When a derived table or view needs to be materialized, for example in a query like:
select * from (select k, id from sbtest1 where id < 50 limit 10  )t1 inner join  (select  k, id from sbtest1 where id > 10 and id < 400) t2 ;

then t1 and t2 are derived tables, and JOIN::optimize is invoked recursively to optimize the subqueries; the backtrace is:
JOIN::optimize->st_select_lex::handle_derived->mysql_derived_optimize->JOIN::optimize 

a3) LIMIT handling

When SELECT DISTINCT, ORDER BY, or GROUP BY is present, row_limit = HA_POS_ERROR; otherwise row_limit is the LIMIT row count.

When a HAVING clause is present, or the SELECT contains the SQL_CALC_FOUND_ROWS hint, m_select_limit = HA_POS_ERROR; otherwise it is the LIMIT row count. m_select_limit is later used to decide whether a full table scan may be needed.

When there is a LIMIT, do_send_rows = 1; otherwise 0.

These variables are used later on.
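The three rules can be condensed into a small sketch; the function and flag names below merely mirror the members described above and are not the actual MySQL code:

```python
# Sketch of the limit-related setup described above.
HA_POS_ERROR = 2**64 - 1  # sentinel meaning "no usable limit"

def limit_setup(limit_rows, has_distinct, has_order, has_group,
                has_having, sql_calc_found_rows):
    # row_limit is disabled whenever DISTINCT/ORDER BY/GROUP BY must see all rows
    row_limit = HA_POS_ERROR if (has_distinct or has_order or has_group) else limit_rows
    # m_select_limit is disabled by HAVING or SQL_CALC_FOUND_ROWS
    m_select_limit = HA_POS_ERROR if (has_having or sql_calc_found_rows) else limit_rows
    do_send_rows = 1 if limit_rows else 0
    return row_limit, m_select_limit, do_send_rows

# Our example: ... ORDER BY pad LIMIT 3 -> the ORDER BY disables row_limit.
row_limit, m_select_limit, do_send_rows = limit_setup(
    3, has_distinct=False, has_order=True, has_group=False,
    has_having=False, sql_calc_found_rows=False)
```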

a4) If this is the first time this SELECT is optimized (select_lex->first_cond_optimization is TRUE; can a SELECT really be optimized more than once? TODO), it also:
>allocates a Query_arena (for prepared statements or stored procedures);
>converts outer joins to inner joins where possible
207     if (simplify_joins(this, join_list, conds, true, false, &conds))

For example, given the following tables t1 and t2, which have identical structure:
mysql> show create table t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `a` int(11) NOT NULL AUTO_INCREMENT,
  `b` int(11) DEFAULT NULL,
  `c` int(11) DEFAULT NULL,
  PRIMARY KEY (`a`)
) ENGINE=InnoDB AUTO_INCREMENT=8179 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Execute:
select * from t1 left join t2 on t1.a = t2.a where t2.b < 1000;

Then optimizer_trace prints:
          {
            "transformations_to_nested_joins": {
              "transformations": [
                "outer_join_to_inner_join",
                "JOIN_condition_to_WHERE",
                "parenthesis_removal"
              ],
              "expanded_query": "/* select#1 */ select `t1`.`a` AS `a`,`t1`.`b` AS `b`,`t1`.`c` AS `c`,`t2`.`a` AS `a`,`t2`.`b` AS `b`,`t2`.`c` AS `c` from `t1` join `t2` where ((`t2`.`b` < 1000) and (`t1`.`a` = `t2`.`a`))"
            }
          },
Similar transformations:
SELECT * FROM t1 LEFT JOIN (t2, t3) ON t2.a=t1.a AND t3.b=t1.b WHERE t2.c < 5
becomes SELECT * FROM t1, t2, t3 WHERE t2.c < 5 AND t2.a=t1.a AND t3.b=t1.b

SELECT * FROM t1 LEFT JOIN t2 ON t2.a=t1.a
  LEFT JOIN t3 ON t3.b=t2.b
WHERE t3 IS NOT NULL

becomes:

SELECT * FROM t1 LEFT JOIN t2 ON t2.a=t1.a, t3
WHERE t3 IS NOT NULL AND t3.b=t2.b =>

SELECT * FROM t1, t2, t3
WHERE t3 IS NOT NULL AND t3.b=t2.b AND t2.a=t1.a

After simplify_joins completes, the bits for the tables involved in nested joins are also set in nested_join_map (function build_bitmap_for_nested_joins).

a5)

Only after the steps above do we reach the query-condition optimization, which is also the first step printed in our example:
 234   conds= optimize_cond(thd, conds, &cond_equal,
 235                        join_list, true, &select_lex->cond_value);

      "join_optimization": {
        "select#": 1,
        "steps": [
          {
            "condition_processing": {
              "condition": "WHERE",
              "original_condition": "((`sbtest1`.`k` >= 160000) and (`sbtest1`.`k` < 320000))",
              "steps": [
                {
                  "transformation": "equality_propagation",
                  "resulting_condition": "((`sbtest1`.`k` >= 160000) and (`sbtest1`.`k` < 320000))"
                },
                {
                  "transformation": "constant_propagation",
                  "resulting_condition": "((`sbtest1`.`k` >= 160000) and (`sbtest1`.`k` < 320000))"
                },
                {
                  "transformation": "trivial_condition_removal",
                  "resulting_condition": "((`sbtest1`.`k` >= 160000) and (`sbtest1`.`k` < 320000))"
                }
              ]
            }
          },

###########################################

The WHERE condition optimization has several parts. Equality propagation (equality_propagation):

For example, a query condition such as:

….where a = 12 and b = c  and a = b;

is converted to:

(multiple equal(12, `t1`.`a`, `t1`.`b`, `t1`.`c`))

….where b > a and a = 2

is converted to:

((`t1`.`b` > 2) and multiple equal(2, `t1`.`a`))
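Equality propagation can be pictured as merging fields and constants into equivalence classes ("multiple equalities"); the following sketch only illustrates that idea and is not the Item_equal implementation:

```python
# Illustrative sketch of equality propagation: merge a=12, b=c, a=b into one
# multiple-equality class, as in multiple equal(12, t1.a, t1.b, t1.c).
def build_multiple_equalities(equalities):
    classes = []  # each class is a set of fields/constants known to be equal
    for lhs, rhs in equalities:
        touched = [c for c in classes if lhs in c or rhs in c]
        merged = set().union(*touched) if touched else set()
        merged.update((lhs, rhs))
        classes = [c for c in classes if c not in touched] + [merged]
    return classes

classes = build_multiple_equalities([("t1.a", 12), ("t1.b", "t1.c"), ("t1.a", "t1.b")])
# all three conditions collapse into a single class {12, t1.a, t1.b, t1.c}
```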

Constant propagation (constant_propagation)

According to the comments, this converts field=field into field=const where possible.

I have not managed to construct an example; most constant propagation already happens in the first step…

TODO: read propagate_cond_constants

One sample result: suite/opt_trace/r/general_no_prot_all.result
Trivial condition removal (trivial_condition_removal)

For example ….where 1 = 0 or  a > 500

is, in this step, converted to:

(`t1`.`a` > 500)

a6)
Optimize the HAVING clause, through the same interface as the WHERE condition:
245     having= optimize_cond(thd, having, &cond_equal, join_list, false,
246                           &select_lex->having_value);

So this step may produce trace output similar to the above.

a7) Optimize count(*), min() and max()
Not expanded here.

TODO: read opt_sum_query()

a8) Build temporary tables if needed (when the join's tables_list is empty)
Function:
371     if (make_tmp_tables_info())
 

a9) Check whether one of the tables appears in the ORDER BY and is compatible with the GROUP BY; if so, return that TABLE object, otherwise return 0:

376   sort_by_table= get_sort_by_table(order, group_list, select_lex->leaf_tables); 

After this we enter an important function, make_join_statistics, which is what the rest of this section discusses.

b.
Enter make_join_statistics

b1)

          {
            "table_dependencies": [
              {
                "table": "`sbtest1`",
                "row_may_be_null": false,
                "map_bit": 0,
                "depends_on_map_bits": [
                ]
              }
            ]
          },

#################

This lists the tables that the join operation depends on.

There is only the sbtest1 table here, so map_bit is 0; map_bit can be seen as the number of the joined table, counting from 0. The function that prints this information is:

make_join_statistics->trace_table_dependencies

b2)

          {
            "ref_optimizer_key_uses": [
            ]
          },

################

For a ref access through a secondary index, when there are equality expressions on indexed columns, the usable keys are printed here, as in this example:

create table t4 (a int , b int ,c int, d int,  key (a), key(b,c));

// insert some rows…

select * from t4 where b =577 and c = 25 and d > 1000;

The ref_optimizer_key_uses section then prints:

{
  "table": "`t4`",
  "field": "b",
  "equals": "577",
  "null_rejecting": false
},
{
  "table": "`t4`",
  "field": "c",
  "equals": "25",
  "null_rejecting": false
}

Another example:

select * from t4 where a = 1000 and b > 577 and c = 1000;

Here the printed information is:

              {
                "table": "`t4`",
                "field": "a",
                "equals": "1000",
                "null_rejecting": false
              }

Clearly, with ref-type index access this SQL can only use the equality on key part a.

If the WHERE condition is changed to a = 1000 and b = 577 and c > 1000, it prints:

              {
                "table": "`t4`",
                "field": "a",
                "equals": "1000",
                "null_rejecting": false
              },
              {
                "table": "`t4`",
                "field": "b",
                "equals": "577",
                "null_rejecting": false
              }

Our example SQL has no equality condition on an index, which is why this section is empty.
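Why only the equality parts matter for ref access can be sketched as taking the longest prefix of the index's key parts that all carry equality predicates; this is illustrative only (the real logic lives in update_ref_and_keys and the keyuse array):

```python
# Illustrative: a 'ref' lookup can use the longest prefix of an index's key
# parts for which equality predicates exist; a range predicate or a missing
# predicate stops the prefix.
def ref_usable_prefix(key_parts, equality_fields):
    prefix = []
    for part in key_parts:
        if part not in equality_fields:
            break
        prefix.append(part)
    return prefix

# t4 with key(a), key(b,c):
# where a = 1000 and b = 577 and c > 1000 -> key(b,c) usable only up to b
assert ref_usable_prefix(["b", "c"], {"a", "b"}) == ["b"]
# where a = 1000 and b > 577 and c = 1000 -> key(b,c) unusable for ref
assert ref_usable_prefix(["b", "c"], {"a", "c"}) == []
```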

The functions invoked are:

make_join_statistics

->update_ref_and_keys

-> print_keyuse_array

update_ref_and_keys is the main function of this logic: based on the conditions, it finds the key values for which an index can be used and stores them in the keyuse array; it also sets the table's possible keys from the conditions.

 

b3)

3302     if (pull_out_semijoin_tables(join))
 

b4) Constant table handling: a table that contains only zero or one row is treated as a constant table (const table).

TODO(3305~3570)

b5)

        {
          "rows_estimation": [
            {
              "table": "`sbtest1`",
              "range_analysis": {
                "table_scan": {
                  "rows": 986190,
                  "cost": 211174
                },
                "potential_range_indices": [
                  {
                    "index": "PRIMARY",
                    "usable": false,
                    "cause": "not_applicable"
                  },
                  {
                    "index": "k",
                    "usable": true,
                    "key_parts": [
                      "k",
                      "d",
                      "id"
                    ]
                  }
                ],
                "setup_range_conditions": [
                ],
                "group_index_range": {
                  "chosen": false,
                  "cause": "not_group_by_or_distinct"
                },
                "analyzing_range_alternatives": {
                  "range_scan_alternatives": [
                    {
                      "index": "k",
                      "ranges": [
                        "160000 <= k < 320000"
                      ],
                      "index_dives_for_eq_ranges": true,
                      "rowid_ordered": false,
                      "using_mrr": false,
                      "index_only": false,
                      "rows": 3660,
                      "cost": 4393,
                      "chosen": true
                    }
                  ],
                  "analyzing_roworder_intersect": {
                    "usable": false,
                    "cause": "too_few_roworder_scans"
                  }
                },
                "chosen_range_access_summary": {
                  "range_access_plan": {
                    "type": "range_scan",
                    "index": "k",
                    "rows": 3660,
                    "ranges": [
                      "160000 <= k < 320000"
                    ]
                  },
                  "rows_for_plan": 3660,
                  "cost_for_plan": 4393,
                  "chosen": true
                }
              }
            }
          ]
        },

###################################################

rows_estimation: start estimating the number of rows to scan.

This is computed for every table involved in the JOIN; the example has only sbtest1, so only sbtest1 and its indexes are considered.

a.

Compute the table's worst_seeks:

3623       s->worst_seeks= min((double) s->found_records / 10,
3624                           (double) s->read_time * 3);

Here s->found_records is the number of records in the table, and s->read_time at the InnoDB layer is the number of clustered index pages.
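A direct transcription of that formula, fed with the example's numbers (986190 rows; 13934 is the scan_time() value seen later and is used here as a stand-in for the page count, so treat the result as illustrative):

```python
# worst_seeks caps the per-lookup cost charged for index accesses on a table:
# worst_seeks = min(found_records / 10, read_time * 3)
def worst_seeks(found_records, read_time):
    return min(found_records / 10, read_time * 3)

w = worst_seeks(986190, 13934)  # min(98619.0, 41802) -> 41802
```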

b. Check whether any index can be used for a GROUP BY or DISTINCT query:

add_group_and_distinct_keys(join, s);

If the query has a GROUP BY, find all indexes that contain every GROUP BY column and add them to join_tab->const_keys and join_tab->keys.

If the query has DISTINCT, likewise find the indexes that contain every selected column and add them to join_tab->const_keys and join_tab->keys.

This can produce trace output; for example, for the following query (b is an indexed column):

select * from t1 where a < 1000 group by b;

                "table": "`t1`",
                "const_keys_added": {
                  "keys": [
                    "b"
                  ],
                  "cause": "group_by"
                },

                "range_analysis": {

c. Next comes the range analysis phase, the main part of this example, since our condition is a simple range query.

Range analysis is performed only when the following conditions hold:

Perform range analysis if there are keys it could use (1)

Don’t do range analysis if on the inner side of an outer join (2).

Do range analysis if on the inner side of a semi-join (3)

3639       TABLE_LIST *const tl= s->table->pos_in_table_list;
3640       if (!s->const_keys.is_clear_all() &&                        // (1)
3641           (!tl->embedding ||                                      // (2)
3642            (tl->embedding && tl->embedding->sj_on_expr)))         // (3)

When these conditions are met, the following is called to start estimating the number of rows scanned:

3652         records= get_quick_record_count(thd, select, s->table,
3653                                         &s->const_keys, join->row_limit);

make_join_statistics

->get_quick_record_count

->SQL_SELECT::test_quick_select 

c1) table_scan: first compute the cost of a full table scan

Here the record count records = 986190 (an approximate value).

scan_time= records * ROW_EVALUATE_COST + 1 = 986190 * 0.2 + 1 = 197239

head->file->scan_time() = 13934

read_time= head->file->scan_time() + scan_time + 1.1 = 13934 + 197239 + 1.1 = 211174.1

read_time is the cost of the full table scan.

When FORCE INDEX is used, scan_time = read_time = DBL_MAX.

When join->row_limit is smaller than the estimated number of rows in the table, read_time is recomputed as (double) records + scan_time + 1.

Because of the ORDER BY here, join->row_limit is set to a very large value, meaning the LIMIT 3 cannot be applied at this point; the following is then output:

                "range_analysis": {
                  "table_scan": {
                    "rows": 986190,
                    "cost": 211174
                  },
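The arithmetic above can be checked directly; ROW_EVALUATE_COST = 0.2 as stated, and the numbers reproduce the trace's "cost": 211174:

```python
# Reproduce the full-table-scan cost printed in table_scan:
# read_time = handler_scan_time + (records * ROW_EVALUATE_COST + 1) + 1.1
ROW_EVALUATE_COST = 0.2

def table_scan_cost(records, handler_scan_time):
    scan_time = records * ROW_EVALUATE_COST + 1   # CPU cost of evaluating rows
    return handler_scan_time + scan_time + 1.1    # plus the handler's I/O estimate

cost = table_scan_cost(986190, 13934)  # ~211174.1, matching the trace
```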
c2) Record all indexes potentially usable for range optimization into the key_parts array (shown as potential_range_indices)

All index information is printed here.

In this example there are only two indexes, the clustered index and key(k,d); since there is no filtering condition on the clustered index, its usable flag is false.

Information about the usable indexes (those for which keys_to_use.is_set(idx) is TRUE) is copied into the key_parts array. The keys used may be affected by use_index_extensions (on by default), which is why the secondary index's key_parts here include the clustered index column.
c3) If a covering index exists, compute the covering index cost (TODO: dig deeper)

There is no covering index in our example, so let's build a simple table:

CREATE TABLE `t4` (
  `a` int(11) DEFAULT NULL,
  `b` int(11) DEFAULT NULL,
  `c` int(11) DEFAULT NULL,
  `d` int(11) DEFAULT NULL,
  KEY `a` (`a`),
  KEY `b` (`b`,`c`)
) ENGINE=InnoDB

Run the query:

select b,c from t4 where  b = 15;

This is a very simple query for which the covering index KEY `b` (`b`,`c`) clearly applies, so the optimizer trace prints:

                  "best_covering_index_scan": {
                    "index": "b",
                    "cost": 13269,
                    "chosen": true
                  },

First, the usable covering index with the shortest key length is located:

int key_for_use= find_shortest_key(head, &head->covering_keys);

because a smaller key length in principle means less IO.

The covering index cost is then computed:

double key_read_time=
        param.table->file->index_only_read_time(key_for_use,
                                                rows2double(records)) +
        records * ROW_EVALUATE_COST;

If key_read_time is smaller than read_time, i.e. cheaper than a full table scan, the covering index is chosen and read_time is set to key_read_time.

index_only_read_time is computed from the number of keys in a half-full page:

uint keys_per_block= (stats.block_size/2/
                        (table_share->key_info[keynr].key_length + ref_length) +
                        1);

  read_time=((double) (records + keys_per_block-1) /
             (double) keys_per_block);
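A transcription of that computation; the block_size, key_length and ref_length values below are made-up inputs for illustration:

```python
# Sketch of handler::index_only_read_time: estimate the number of index pages
# to read for an index-only scan, assuming pages are on average half full.
def index_only_read_time(records, block_size, key_length, ref_length):
    keys_per_block = block_size // 2 // (key_length + ref_length) + 1
    # pages needed to hold `records` keys, rounded up
    return (records + keys_per_block - 1) / keys_per_block

# e.g. 16KB pages, 4-byte key plus 6-byte row reference -> 820 keys per block
pages = index_only_read_time(100000, 16384, 4, 6)
```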

c4) setup_range_conditions

May output impossible_range / range_scan_possible. TODO: what exactly happens in this step is still only a guess on my part.
c5) group_index_range (TODO)

Code:

2808     group_trp= get_best_group_min_max(&param, tree, best_read_time);

Our example contains no GROUP BY, so chosen is shown as false, because not_group_by_or_distinct.

A simple example:

select * from t4 where a > 100000 group by b ;

The relevant output is:

                  "group_index_range": {
                    "potential_group_range_indices": [
                      {
                        "index": "b",
                        "usable": false,
                        "cause": "not_covering"
                      },
                      {
                        "index": "a",
                        "usable": false,
                        "cause": "not_covering"
                      }
                    ]
                  },

If a qualifying index exists, best_group_range_summary may also be output. TODO
c6) analyzing_range_alternatives

Analyze the costs of the candidate range scans. The example query is simple, with a range on only one index, k, so there is a single alternative:

                    "range_scan_alternatives": [
                      {
                        "index": "k",
                        "ranges": [
                          "160000 <= k < 320000"
                        ],
                        "index_dives_for_eq_ranges": true,
                        "rowid_ordered": false,
                        "using_mrr": false,
                        "index_only": false,
                        "rows": 3660,
                        "cost": 4393,
                        "chosen": true
                      }
                    ],
Here is a slightly more complex example, again using the t4 table above:

select * from t4 where a > 100 and b > 100 and b < 500;

There are two ranges here, so both index(a) and index(b,c) might be usable; the optimizer_trace output is:

                    "range_scan_alternatives": [
                      {
                        "index": "b",
                        "ranges": [
                          "100 < b < 500"
                        ],
                        "index_dives_for_eq_ranges": true,
                        "rowid_ordered": false,
                        "using_mrr": false,
                        "index_only": false,
                        "rows": 2606,
                        "cost": 3128.2,
                        "chosen": true
                      },
                      {
                        "index": "a",
                        "ranges": [
                          "100 < a"
                        ],
                        "index_dives_for_eq_ranges": true,
                        "rowid_ordered": false,
                        "using_mrr": false,
                        "index_only": false,
                        "rows": 32850,
                        "cost": 39421,
                        "chosen": false,
                        "cause": "cost"
                      }
                    ],

The corresponding code:

 2847         /* Get best 'range' plan and prepare data for making other plans */
 2848         if ((range_trp= get_key_scans_params(&param, tree, FALSE, TRUE,
 2849                                              best_read_time)))
 2850         {
 2851           best_trp= range_trp;
 2852           best_read_time= best_trp->read_cost;
 2853         }

get_key_scans_params computes the cost of every index usable for the range query:

5546       found_records= check_quick_select(param, idx, read_index_only, *key,
5547                                         update_tbl_stats, &mrr_flags,
5548                                         &buf_size, &cost);

c7) analyzing_roworder_intersect

Optimize ROR scans // Get best non-covering ROR-intersection plan

The corresponding code:

 2869           if ((rori_trp= get_best_ror_intersect(&param, tree, best_read_time)))
 2870           {
 2871             best_trp= rori_trp;
 2872             best_read_time= best_trp->read_cost;
 2873           }

In this example chosen is false because of "too_few_roworder_scans".
c8) If an index merge is possible, its cost is computed as well;

sql/opt_range.cc:2878~2910

The possible optimizer trace output here is analyzing_index_merge.

c9) chosen_range_access_summary

After the steps above, the best TABLE_READ_PLAN is printed.

 

b6)
3737   if (!join->plan_is_const())
3738     optimize_keyuse(join, keyuse_array);
3739
3740   join->allow_outer_refs= true;
3741
3742   if (optimize_semijoin_nests_for_materialization(join))
3743     DBUG_RETURN(true);

 

b7) 

          {
            "considered_execution_plans": [
              {
                "plan_prefix": [
                ],
                "table": "`sbtest1`",
                "best_access_path": {
                  "considered_access_paths": [
                    {
                      "access_type": "range",
                      "rows": 3660,
                      "cost": 5125,
                      "chosen": true
                    }
                  ]
                },
                "cost_for_plan": 5125,
                "rows_for_plan": 3660,
                "chosen": true
              }
            ]
          },

###########################

Now we come to choosing the join order of multiple tables:

3745   if (Optimize_table_order(thd, join, NULL).choose_table_order())
3746     DBUG_RETURN(true);

Optimize_table_order::choose_table_order

This function decides the join order of multiple tables, using a greedy search for the best query plan; in 5.5 and earlier the corresponding function was choose_plan.

The tables are first sorted by record count, fewest rows first (when there is no STRAIGHT_JOIN).

For STRAIGHT_JOIN, optimize_straight_join(join_tables) generates the query plan.

Otherwise, greedy_search(join_tables) performs a depth-first traversal to find the best execution plan.

Our example has only one table, sbtest1, which is the only choice, so let's construct a two-table join to look at this:

select * from t1,t2  where t1.a < 5000 and t2.b = 4000;

The corresponding trace printed is:

            {
              "considered_execution_plans": [
                {
                  "plan_prefix": [
                  ],
                  "table": "`t2`",
                  "best_access_path": {
                    "considered_access_paths": [
                      {
                        "access_type": "ref",
                        "index": "b",
                        "rows": 1,
                        "cost": 1.2,
                        "chosen": true
                      },
                      {
                        "access_type": "range",
                        "cause": "heuristic_index_cheaper",
                        "chosen": false
                      }
                    ]
                  },
                  "cost_for_plan": 1.2,
                  "rows_for_plan": 1,
                  "rest_of_plan": [
                    {
                      "plan_prefix": [
                        "`t2`"
                      ],
                      "table": "`t1`",
                      "best_access_path": {
                        "considered_access_paths": [
                          {
                            "access_type": "range",
                            "rows": 2963,
                            "cost": 1189.3,
                            "chosen": true
                          }
                        ]
                      },
                      "cost_for_plan": 1190.5,
                      "rows_for_plan": 2963,
                      "chosen": true
                    }
                  ]
                },
                {
                  "plan_prefix": [
                  ],
                  "table": "`t1`",
                  "best_access_path": {
                    "considered_access_paths": [
                      {
                        "access_type": "range",
                        "rows": 2963,
                        "cost": 1189.3,
                        "chosen": true
                      }
                    ]
                  },
                  "cost_for_plan": 1189.3,
                  "rows_for_plan": 2963,
                  "pruned_by_heuristic": true
                }
              ]
            }
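The shape of this search — try each table as the first one, extend the cheaper prefix via rest_of_plan, prune the rest — can be caricatured as follows; the cost model (prefix fanout times row estimate) is invented for illustration and is not MySQL's:

```python
# Toy depth-first join-order search with cost-based pruning, mimicking the
# shape of considered_execution_plans / rest_of_plan / pruned_by_heuristic.
def search(tables, rows):
    best = (float("inf"), None)
    def dfs(prefix, remaining, cost, fanout):
        nonlocal best
        if cost >= best[0]:          # prune: this prefix is already worse
            return
        if not remaining:
            best = (cost, prefix)
            return
        for t in sorted(remaining, key=lambda t: rows[t]):  # fewest rows first
            dfs(prefix + [t], remaining - {t},
                cost + fanout * rows[t], fanout * rows[t])
    dfs([], set(tables), 0.0, 1.0)
    return best

cost, order = search(["t1", "t2"], {"t1": 2963, "t2": 1})
# starting from t2 (1 row) is cheaper, as in the trace above
```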

 

b8) Decide which strategy a subquery should use: conversion to EXISTS, or materialization

3752   if (join->decide_subquery_strategy())
3753     DBUG_RETURN(true);
3754
3755   join->refine_best_rowcount();
This also produces trace output; the trace keywords are:
execution_plan_for_potential_materialization/surely_same_plan_as_EXISTS/subq_mat_decision/subq_attached_to_const_table/subq_attached_to_table/subq_attached_to_join_result

b9)
3767   if (thd->lex->is_single_level_stmt())
3768     thd->status_var.last_query_cost= join->best_read;
3769
3770   /* Generate an execution plan from the found optimal join order. */
3771   if (join->get_best_combination())
3772     DBUG_RETURN(true);
This sets the session variable last_query_cost and generates the execution plan from the join order determined earlier.

c. We return to JOIN::optimize and continue:

          {
            "attaching_conditions_to_tables": {
              "original_condition": "((`sbtest1`.`k` >= 160000) and (`sbtest1`.`k` < 320000))",
              "attached_conditions_computation": [
                {
                  "table": "`sbtest1`",
                  "rechecking_index_usage": {
                    "recheck_reason": "low_limit",
                    "limit": 3,
                    "row_estimate": 3660
                  }
                }
              ],
              "attached_conditions_summary": [
                {
                  "table": "`sbtest1`",
                  "attached": "((`sbtest1`.`k` >= 160000) and (`sbtest1`.`k` < 320000))"
                }
              ]
            }
          },

###########################################

The function called is:

505   if (make_join_select(this, conds))

According to the comments, make_join_select mainly does the following:

Step #1: Extract constant condition

        – Extract and check the constant part of the WHERE
        – Extract constant parts of ON expressions from outer

          joins and attach them appropriately. 

Trace output keyword: condition_on_constant_tables
Step #2: Extract WHERE/ON parts

Heuristic: Switch from 'ref' to 'range' access if 'range'
         access can utilize more keyparts than 'ref' access. Conditions
         for doing switching:

Trace output keywords:

attaching_conditions_to_tables/attached_conditions_computation


The attached_conditions_summary section is printed at the end of the function with the final result shown above.

 

d.

          {
            "clause_processing": {
              "clause": "ORDER BY",
              "original_clause": "`sbtest1`.`pad`",
              "items": [
                {
                  "item": "`sbtest1`.`pad`"
                }
              ],
              "resulting_clause_is_simple": true,
              "resulting_clause": "`sbtest1`.`pad`"
            }
          },

#############################

This step tries to optimize DISTINCT / GROUP BY / ORDER BY and so on:
sql/sql_optimizer.cc: 514~695

517     order= ORDER_with_src(remove_const(this, order, conds, 1, &simple_order, "ORDER BY"), order.src);

From the trace output, nothing was changed here.

GROUP BY produces similar output; for example, change the example's ORDER BY into a GROUP BY:
select * from sbtest.sbtest1  where k >= 160000 and k < 320000  group by pad limit 3;

          {
            "clause_processing": {
              "clause": "GROUP BY",
              "original_clause": "`sbtest`.`sbtest1`.`pad`",
              "items": [
                {
                  "item": "`sbtest`.`sbtest1`.`pad`"
                }
              ],
              "resulting_clause_is_simple": true,
              "resulting_clause": "`sbtest`.`sbtest1`.`pad`"
            }
          },

f.

          {
            "refine_plan": [
              {
                "table": "`sbtest1`",
                "pushed_index_condition": "((`sbtest1`.`k` >= 160000) and (`sbtest1`.`k` < 320000))",
                "table_condition_attached": null,
                "access_type": "range"
              }
            ]
          }

########################################

The function called is:

725   if (make_join_readinfo(this, select_opts_for_readinfo, no_jbuf_after))
726     DBUG_RETURN(1);

The refine_plan section is printed inside make_join_readinfo.

This can be regarded as the final query plan output.

g. Check whether a temporary table needs to be created
sql/sql_optimizer.cc:   728~744

h. The remaining code in JOIN::optimize
What follows handles ORDER BY / LIMIT / GROUP BY and so on, and may produce further optimizer trace output… set aside for now; I'll look at it when I run into it.
For example, it decides whether the ORDER BY can be skipped by using an index (test_if_skip_sort_order).

Execution phase

Entry function: JOIN::exec

    {
      "join_execution": {
        "select#": 1,
        "steps": [
          {
            "filesort_information": [
              {
                "direction": "asc",
                "table": "`sbtest1`",
                "field": "pad"
              }
            ],
            "filesort_priority_queue_optimization": {
              "limit": 3,
              "rows_estimate": 13386691,
              "row_size": 495,
              "memory_available": 32768,
              "chosen": true
            },
            "filesort_execution": [
            ],
            "filesort_summary": {
              "rows": 4,
              "examined_rows": 3662,
              "number_of_tmp_files": 0,
              "sort_buffer_size": 2012,
              "sort_mode": "<sort_key, additional_fields>"
            }
          }
        ]
      }
    }

######################################

This prints the information related to the filesort. filesort_information gives the basic sort parameters;
filesort_priority_queue_optimization shows whether a priority queue can be used.

For example, for a query like:

SELECT … FROM t ORDER BY a1,…,an LIMIT max_rows;

the function check_if_pq_applicable checks whether a priority queue can be used to maintain the result; the following conditions must hold:

– cheaper than a merge sort

– enough memory to hold max_rows records

memory_available is the value of sort_buffer_size.
TODO: this appears to be a 5.6 optimization; 5.5 has no corresponding function. To be examined in detail later.
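The memory condition and the bounded-heap idea can be sketched like this; the (limit + 1) rows figure is my reading of the trace (filesort_summary shows "rows": 4 for LIMIT 3), so treat the whole sketch as illustrative:

```python
# Illustrative sketch of the priority-queue check in check_if_pq_applicable:
# keep at most LIMIT (+1 auxiliary) rows in a heap instead of sorting all rows.
import heapq

def pq_fits_in_memory(limit, row_size, memory_available):
    # the queue must be able to hold limit + 1 rows at a time (my reading)
    return (limit + 1) * row_size <= memory_available

# trace numbers: limit 3, row_size 495, memory_available 32768 -> chosen: true
assert pq_fits_in_memory(3, 495, 32768)

# the execution idea: a bounded max-heap keeps only the LIMIT smallest sort keys
def top_n(rows, key, n):
    heap = []  # max-heap of size n via negated keys
    for r in rows:
        heapq.heappush(heap, (-key(r), r))
        if len(heap) > n:
            heapq.heappop(heap)  # drop the current largest
    return [r for _, r in sorted(heap, reverse=True)]
```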


Original article. When reposting, please credit: Simple Life.

Permalink: [MySQL 5.6] A simple optimizer_trace example
