Install Oracle Instant Client 12.1.0.2 on Mac

First, download the Basic and SQL*Plus packages from the Oracle website.

Create the directories used by the Oracle client, then distribute the files from the two downloaded packages into the matching directories:

mkdir -p /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/bin
mkdir -p /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/lib
mkdir -p /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/jdbc/lib
mkdir -p /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/rdbms/jlib
mkdir -p /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/sqlplus/admin
mkdir -p /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/network/admin

cd instantclient_12_1   # the two packages unzip into instantclient_12_1 and instantclient_12_1-2; run the moves below from each

mv ojdbc* /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/lib/
mv x*.jar /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/rdbms/jlib/
mv glogin.sql /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/sqlplus/admin/
mv *dylib* /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/lib/
mv *README /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/
mv * /Users/leo/oracle/product/instantclient_64/12.1.0.1.0/bin/

Add a tnsnames.ora file (under network/admin):
PDB1_122 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.20)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = pdb1)
    )
  )

Add the environment variables to .bash_profile:
export ORACLE_HOME=/Users/leo/oracle/product/instantclient_64/12.1.0.1.0
export PATH=$ORACLE_HOME/bin:$PATH
export DYLD_LIBRARY_PATH=$ORACLE_HOME/lib

The client installation is now complete.


Oracle analytic function-SUM/MAX/MIN/AVG/COUNT

The SUM analytic function computes the cumulative sum of an expression within a group.
The MIN analytic function finds the minimum value of an expression within a group's data window.
The MAX analytic function finds the maximum value of an expression within a group's data window.
The AVG analytic function computes the average of an expression within a group and data window.
The COUNT analytic function keeps a running count of occurrences within a group. Given * or any non-null constant, COUNT counts all rows; given an expression, it counts the rows where the expression is non-null. Ties are all included in the count, and DISTINCT can be used to count only the distinct values in a group.

Syntax:
SUM([ DISTINCT | ALL ] expr) [ OVER (analytic_clause) ]
MIN([ DISTINCT | ALL ] expr) [ OVER (analytic_clause) ]
MAX([ DISTINCT | ALL ] expr) [ OVER (analytic_clause) ]
AVG([ DISTINCT | ALL ] expr) [ OVER(analytic_clause) ]
COUNT({ * | [ DISTINCT | ALL ] expr }) [ OVER (analytic_clause) ]

Demonstration

SELECT department_id,
       first_name || ' ' || last_name employee_name,
       hire_date,
       salary,
       SUM(salary) OVER(PARTITION BY department_id ORDER BY hire_date) AS sal_dept_sum, -- running sum of salaries within each department_id
       MIN(salary) OVER(PARTITION BY department_id ORDER BY hire_date) AS sal_dept_min, -- running minimum salary within each department_id
       MAX(salary) OVER(PARTITION BY department_id ORDER BY hire_date) AS sal_dept_max, -- running maximum salary within each department_id
       AVG(salary) OVER(PARTITION BY department_id ORDER BY hire_date) AS sal_dept_avg, -- running average salary within each department_id
       COUNT(*) OVER(ORDER BY salary) AS count_by_salary, -- cumulative count in salary order
       COUNT(*) OVER(ORDER BY salary RANGE BETWEEN 100 PRECEDING AND 400 FOLLOWING) AS sal_moving_count -- number of rows whose salary lies in
                                                                                                        -- [current salary - 100, current salary + 400]
  FROM employees
 WHERE department_id <= 40; 

sal_moving_count works like this: take employee Karen Colmenares, whose salary is 2500. The range the clause computes is [2500-100, 2500+400], i.e. [2400, 2900]. Before this row there are no qualifying records (>= 2400); after it there are 3 qualifying rows (<= 2900). Counting the row itself, sal_moving_count is 4. The remaining rows follow the same logic.
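This RANGE window can be sketched in plain Python (a toy emulation over a sorted salary list, not how Oracle evaluates it; the salaries are the ones from the query above):

```python
def moving_count(sorted_vals, low, high):
    """For each value v in an ascending list, count the rows whose value
    falls in [v - low, v + high] -- a toy emulation of
    COUNT(*) OVER (ORDER BY v RANGE BETWEEN low PRECEDING AND high FOLLOWING)."""
    return [sum(1 for w in sorted_vals if v - low <= w <= v + high)
            for v in sorted_vals]

# salaries of the department_id <= 40 employees, sorted ascending
salaries = sorted([6500, 11000, 3100, 4400, 13000, 2800, 6000, 2900, 2600, 2500])
print(moving_count(salaries, 100, 400))
# for salary 2500 the window is [2400, 2900]: 2500, 2600, 2800, 2900 -> 4
```

For the lowest salary (2500) the emulation returns 4, matching the Karen Colmenares example worked out above.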


Install Hadoop and Spark on macOS

Hadoop version: 2.7.2
Spark version: spark-2.1.0-bin-hadoop2.7
Python version: the 2.7 build that ships with macOS

1. Install Java

$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
$ /usr/libexec/java_home
/Library/Java/JavaVirtualMachines/jdk1.8.0_131.jdk/Contents/Home

Add the Java environment variable to .bash_profile:
if which java > /dev/null; then export JAVA_HOME=$(/usr/libexec/java_home); fi

2. Enable SSH

Go to System Preferences -> Sharing and check "Remote Login".
$ ssh-keygen -t rsa -P ''
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Verify that SSH to localhost works:
$ ssh localhost

3. Download Hadoop 2.7.2

curl -O http://apache.fayea.com/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz

4. Create the Hadoop directory

$ cd ~/Downloads
$ tar xzvf hadoop-2.7.2.tar.gz
$ mv hadoop-2.7.2 /usr/local/hadoop

5. Edit the Hadoop configuration files

$ cd /usr/local/hadoop/etc/hadoop/

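The original configuration listing is missing at this point. For reference, a minimal pseudo-distributed setup (following the Apache Hadoop single-node guide; the values below are the stock defaults from that guide, not recovered from this post) edits core-site.xml and hdfs-site.xml roughly as follows:

```xml
<!-- etc/hadoop/core-site.xml: point the default filesystem at a local HDFS -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- etc/hadoop/hdfs-site.xml: single node, so keep one replica per block -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```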

6. Start the Hadoop services

Format HDFS
$ cd /usr/local/hadoop
$ bin/hdfs namenode -format

Start HDFS
$ sbin/start-dfs.sh

Start YARN
$ sbin/start-yarn.sh

Check HDFS file Directory
$ bin/hdfs dfs -ls /

Create the MapReduce job directories on HDFS:
$ hdfs dfs -mkdir /user
$ hdfs dfs -mkdir /user/leo

7. Install Spark

Edit the spark-env.sh file under Spark's conf directory and add the following:

export SPARK_LOCAL_IP=127.0.0.1
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_YARN_HOME=$HADOOP_HOME

8. Configure environment variables

The environment variables after configuration:

$ more .bash_profile
if which java > /dev/null; then export JAVA_HOME=$(/usr/libexec/java_home); fi

export SPARK_HOME=/Users/leo/spark-2.1.0-bin-hadoop2.7
export PATH=$SPARK_HOME/bin:$PATH
export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH
export PATH=$PATH:/usr/local/hadoop/bin
export HADOOP_HOME=/usr/local/hadoop/
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native/:$LD_LIBRARY_PATH

9. Configuration complete

Start spark-shell and pyspark; the startup output is shown below.

$ pyspark
Python 2.7.10 (default, Jul 30 2016, 19:40:32)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/05/03 21:22:38 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/05/03 21:22:42 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version 2.1.0
/_/

Using Python version 2.7.10 (default, Jul 30 2016 19:40:32)
SparkSession available as 'spark'.
>>>
$ spark-shell
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/05/03 21:23:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/05/03 21:24:02 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://127.0.0.1:4040
Spark context available as 'sc' (master = local[*], app id = local-1493817838677).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.1.0
/_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_131)
Type in expressions to have them evaluated.
Type :help for more information.
scala>

Troubleshooting notes

While bringing up the Scala shell during configuration, errors were thrown; they can be inspected at http://localhost:8088. In this installation, a JAVA_HOME problem produced some odd failures, and the logs on the 8088 page showed /bin/bash: /bin/java: No such file or directory. The fix described in https://issues.apache.org/jira/browse/HADOOP-8717 is to
edit $HADOOP_HOME/libexec/hadoop-config.sh. The original block reads:

if [ -x /usr/libexec/java_home ]; then
export JAVA_HOME=($(/usr/libexec/java_home))
else
export JAVA_HOME=(/Library/Java/Home)
fi
After the change:
if [ -x /usr/libexec/java_home ]; then
export JAVA_HOME=$(/usr/libexec/java_home)
else
export JAVA_HOME=/Library/Java/Home
fi

Oracle analytic function-LEAD

LAG fetches the row at a given offset before the current row; its counterpart, which fetches the row at a given offset after the current row, is LEAD. LEAD is essentially LAG run in the opposite direction.

Syntax (11gR2):
LEAD
{ ( value_expr [, offset [, default]] ) [ { RESPECT | IGNORE } NULLS ]
| ( value_expr [ { RESPECT | IGNORE } NULLS ] [, offset [, default]] )
}
OVER ([ query_partition_clause ] order_by_clause)

The base data for this post is the same as that used in Oracle analytic function-LAG.

Base data

SQL> select employee_id,hire_date,salary from employees where department_id <= 40 order by hire_date;

EMPLOYEE_ID HIRE_DATE               SALARY
----------- ------------------- ----------
        203 2002-06-07 00:00:00       6500
        114 2002-12-07 00:00:00      11000
        115 2003-05-18 00:00:00       3100
        200 2003-09-17 00:00:00       4400
        201 2004-02-17 00:00:00      13000
        117 2005-07-24 00:00:00       2800
        202 2005-08-17 00:00:00       6000
        116 2005-12-24 00:00:00       2900
        118 2006-11-15 00:00:00       2600
        119 2007-08-10 00:00:00       2500

With offset 1 and a default of 0:

SQL> select employee_id,
  2         hire_date,
  3         salary,
  4         lead(salary,1,0) over(order by hire_date) as salary_next
  5    from employees
  6   where department_id <= 40
  7   order by hire_date;

EMPLOYEE_ID HIRE_DATE               SALARY SALARY_NEXT
----------- ------------------- ---------- -----------
        203 2002-06-07 00:00:00       6500       11000
        114 2002-12-07 00:00:00      11000        3100
        115 2003-05-18 00:00:00       3100        4400
        200 2003-09-17 00:00:00       4400       13000
        201 2004-02-17 00:00:00      13000        2800
        117 2005-07-24 00:00:00       2800        6000
        202 2005-08-17 00:00:00       6000        2900
        116 2005-12-24 00:00:00       2900        2600
        118 2006-11-15 00:00:00       2600        2500
        119 2007-08-10 00:00:00       2500           0  -- filled with the default 0

10 rows selected.
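LEAD's offset/default mechanics can be sketched in plain Python (a toy emulation over a pre-sorted list, not Oracle's implementation; the salary list below is the hire_date-ordered data from the query above):

```python
def lead(values, offset=1, default=None):
    """Emulate Oracle's LEAD over a pre-sorted list: for each row,
    return the value `offset` rows ahead, or `default` when the
    offset runs past the end of the list."""
    n = len(values)
    return [values[i + offset] if i + offset < n else default
            for i in range(n)]

# salaries ordered by hire_date, as in the base data
salaries = [6500, 11000, 3100, 4400, 13000, 2800, 6000, 2900, 2600, 2500]
print(lead(salaries, 1, 0))
# the last row has no successor, so it receives the default 0
```

The printed list matches the SALARY_NEXT column above, including the trailing 0 for employee 119.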

Adding a PARTITION BY inside OVER():

SQL> select employee_id,
  2         hire_date,
  3         job_id,
  4         salary,
  5         lead(salary, 1, null) over(partition by job_id order by hire_date) as salary_next
  6    from employees
  7   where department_id <= 40
  8   order by hire_date;

EMPLOYEE_ID HIRE_DATE           JOB_ID                   SALARY SALARY_NEXT
----------- ------------------- -------------------- ---------- -----------
        203 2002-06-07 00:00:00 HR_REP                     6500
        114 2002-12-07 00:00:00 PU_MAN                    11000
        115 2003-05-18 00:00:00 PU_CLERK                   3100        2800
        200 2003-09-17 00:00:00 AD_ASST                    4400
        201 2004-02-17 00:00:00 MK_MAN                    13000
        117 2005-07-24 00:00:00 PU_CLERK                   2800        2900
        202 2005-08-17 00:00:00 MK_REP                     6000
        116 2005-12-24 00:00:00 PU_CLERK                   2900        2600
        118 2006-11-15 00:00:00 PU_CLERK                   2600        2500
        119 2007-08-10 00:00:00 PU_CLERK                   2500

Results with IGNORE NULLS (against the t_lag table built in the LAG post, where employee 118's salary is NULL):

SQL> select employee_id,hire_date,salary,lead(salary,1,null) ignore nulls over (order by employee_id) as salary_next
  2  from t_lag
  3  order by employee_id;

EMPLOYEE_ID HIRE_DATE               SALARY SALARY_NEXT
----------- ------------------- ---------- -----------
        114 2002-12-07 00:00:00      11000        3100
        115 2003-05-18 00:00:00       3100        2900
        116 2005-12-24 00:00:00       2900        2800
        117 2005-07-24 00:00:00       2800        2500
        118 2006-11-15 00:00:00                   2500
        119 2007-08-10 00:00:00       2500        4400
        200 2003-09-17 00:00:00       4400       13000
        201 2004-02-17 00:00:00      13000        6000
        202 2005-08-17 00:00:00       6000        6500
        203 2002-06-07 00:00:00       6500

Oracle analytic function-LAG

LAG is an analytic function introduced in Oracle 8i. It lets a query reach several rows of a table at once without a self-join: it returns, for the current row, the value of the specified column from the row offset rows earlier.

Syntax:
LAG
{ ( value_expr [, offset [, default]]) [ { RESPECT | IGNORE } NULLS ]
| ( value_expr [ { RESPECT | IGNORE } NULLS ] [, offset [, default]] )
}
OVER ([ query_partition_clause ] order_by_clause)

• value_expr: the value expression, usually a column but possibly a more general expression; it cannot itself be an analytic function.
• offset: optional; the physical offset from the current row. It defaults to 1, i.e. the row immediately preceding the current one.
• default: optional; returned when the offset reaches outside the window. It defaults to NULL.
• OVER: defines the result-set window; when the following PARTITION BY is absent, the window is the current result set.
• query_partition_clause: optional partitioning clause; without it the entire result set forms a single partition.
• order_by_clause: required sort clause, of the form ORDER BY xxx DESC/ASC.

Oracle 11g enhanced the LAG function with {RESPECT | IGNORE} NULLS.

Base data

SQL> select employee_id,hire_date,salary from employees where department_id <= 40 order by hire_date;

EMPLOYEE_ID HIRE_DATE               SALARY
----------- ------------------- ----------
        203 2002-06-07 00:00:00       6500
        114 2002-12-07 00:00:00      11000
        115 2003-05-18 00:00:00       3100
        200 2003-09-17 00:00:00       4400
        201 2004-02-17 00:00:00      13000
        117 2005-07-24 00:00:00       2800
        202 2005-08-17 00:00:00       6000
        116 2005-12-24 00:00:00       2900
        118 2006-11-15 00:00:00       2600
        119 2007-08-10 00:00:00       2500

Using LAG with its default arguments:

SQL> select employee_id,hire_date,salary,lag(salary) over (order by hire_date) as salary_prev
  2  from employees
  3  where department_id <= 40
  4  order by hire_date;

EMPLOYEE_ID HIRE_DATE              SALARY   SALARY_PREV
----------- ------------------- ---------- -----------
        203 2002-06-07 00:00:00       6500        NULL
        114 2002-12-07 00:00:00      11000        6500
        115 2003-05-18 00:00:00       3100       11000
        200 2003-09-17 00:00:00       4400        3100
        201 2004-02-17 00:00:00      13000        4400
        117 2005-07-24 00:00:00       2800       13000
        202 2005-08-17 00:00:00       6000        2800
        116 2005-12-24 00:00:00       2900        6000
        118 2006-11-15 00:00:00       2600        2900
        119 2007-08-10 00:00:00       2500        2600

With offset 2 and a default of 200:

SQL> select employee_id,hire_date,salary,lag(salary,2,200) over (order by hire_date) as salary_prev
  2  from employees
  3  where department_id <= 40
  4  order by hire_date;

EMPLOYEE_ID HIRE_DATE               SALARY SALARY_PREV
----------- ------------------- ---------- -----------
        203 2002-06-07 00:00:00       6500         200
        114 2002-12-07 00:00:00      11000         200
        115 2003-05-18 00:00:00       3100        6500
        200 2003-09-17 00:00:00       4400       11000
        201 2004-02-17 00:00:00      13000        3100
        117 2005-07-24 00:00:00       2800        4400
        202 2005-08-17 00:00:00       6000       13000
        116 2005-12-24 00:00:00       2900        2800
        118 2006-11-15 00:00:00       2600        6000
        119 2007-08-10 00:00:00       2500        2900

Adding a partition clause to LAG:

SQL> select employee_id,hire_date,job_id,salary,lag(salary,1,null) over (partition by job_id order by hire_date) as salary_prev
  2  from employees
  3  where department_id <= 40
  4  order by hire_date;

EMPLOYEE_ID HIRE_DATE           JOB_ID                   SALARY SALARY_PREV
----------- ------------------- -------------------- ---------- -----------
        203 2002-06-07 00:00:00 HR_REP                     6500
        114 2002-12-07 00:00:00 PU_MAN                    11000
        115 2003-05-18 00:00:00 PU_CLERK                   3100
        200 2003-09-17 00:00:00 AD_ASST                    4400
        201 2004-02-17 00:00:00 MK_MAN                    13000
        117 2005-07-24 00:00:00 PU_CLERK                   2800        3100
        202 2005-08-17 00:00:00 MK_REP                     6000
        116 2005-12-24 00:00:00 PU_CLERK                   2900        2800
        118 2006-11-15 00:00:00 PU_CLERK                   2600        2900
        119 2007-08-10 00:00:00 PU_CLERK                   2500        2600

Testing the 11g {RESPECT | IGNORE} NULLS enhancement:

create table t_lag as select employee_id,hire_date,job_id,salary  from employees where department_id <= 40;
select employee_id,hire_date,job_id,salary  from t_lag;
update t_lag set salary = null where employee_id = 118;
commit;

The rows after the update:

SQL> select employee_id,hire_date,job_id,salary  from t_lag;

EMPLOYEE_ID HIRE_DATE           JOB_ID                   SALARY
----------- ------------------- -------------------- ----------
        200 2003-09-17 00:00:00 AD_ASST                    4400
        201 2004-02-17 00:00:00 MK_MAN                    13000
        202 2005-08-17 00:00:00 MK_REP                     6000
        114 2002-12-07 00:00:00 PU_MAN                    11000
        115 2003-05-18 00:00:00 PU_CLERK                   3100
        116 2005-12-24 00:00:00 PU_CLERK                   2900
        117 2005-07-24 00:00:00 PU_CLERK                   2800
        118 2006-11-15 00:00:00 PU_CLERK
        119 2007-08-10 00:00:00 PU_CLERK                   2500
        203 2002-06-07 00:00:00 HR_REP                     6500

EMPLOYEE_ID 118 now has a NULL salary; re-run the LAG query from above:

SQL> select employee_id,hire_date,salary,lag(salary,1) over (order by hire_date) as salary_prev
  2  from t_lag
  3  order by hire_date;

EMPLOYEE_ID HIRE_DATE               SALARY SALARY_PREV
----------- ------------------- ---------- -----------
        203 2002-06-07 00:00:00       6500
        114 2002-12-07 00:00:00      11000        6500
        115 2003-05-18 00:00:00       3100       11000
        200 2003-09-17 00:00:00       4400        3100
        201 2004-02-17 00:00:00      13000        4400
        117 2005-07-24 00:00:00       2800       13000
        202 2005-08-17 00:00:00       6000        2800
        116 2005-12-24 00:00:00       2900        6000
        118 2006-11-15 00:00:00                   2900
        119 2007-08-10 00:00:00       2500

Because the row for 118 is NULL, the LAG value for 119 is also NULL. The default behavior is RESPECT NULLS; switching to IGNORE NULLS:

SQL> select employee_id,hire_date,salary,lag(salary,1,null) ignore nulls over (order by hire_date) as salary_prev
  2  from t_lag
  3  order by hire_date;

EMPLOYEE_ID HIRE_DATE               SALARY SALARY_PREV
----------- ------------------- ---------- -----------
        203 2002-06-07 00:00:00       6500
        114 2002-12-07 00:00:00      11000        6500
        115 2003-05-18 00:00:00       3100       11000
        200 2003-09-17 00:00:00       4400        3100
        201 2004-02-17 00:00:00      13000        4400
        117 2005-07-24 00:00:00       2800       13000
        202 2005-08-17 00:00:00       6000        2800
        116 2005-12-24 00:00:00       2900        6000
        118 2006-11-15 00:00:00                   2900
        119 2007-08-10 00:00:00       2500        2900

The new IGNORE NULLS option skips NULL values and keeps looking back for the nearest qualifying non-NULL value.
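Both the default RESPECT NULLS behavior and the IGNORE NULLS variant can be sketched in plain Python (a toy emulation, assuming rows already sorted by hire_date; the salary list mirrors t_lag, with None standing in for employee 118's NULL salary):

```python
def lag(values, offset=1, default=None, ignore_nulls=False):
    """Emulate Oracle's LAG over a pre-sorted list. With
    ignore_nulls=True, None entries are skipped when looking back,
    mirroring 11g's IGNORE NULLS option."""
    result = []
    for i in range(len(values)):
        prior = values[:i]
        if ignore_nulls:
            prior = [v for v in prior if v is not None]
        result.append(prior[-offset] if len(prior) >= offset else default)
    return result

# salaries ordered by hire_date; employee 118's salary is None
salaries = [6500, 11000, 3100, 4400, 13000, 2800, 6000, 2900, None, 2500]
print(lag(salaries, 1))                     # row 119 gets None (RESPECT NULLS)
print(lag(salaries, 1, ignore_nulls=True))  # the None is skipped; row 119 gets 2900
```

With ignore_nulls=True the last row looks past the None and picks up 2900, matching the IGNORE NULLS output above.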


LISTAGG

In Oracle 11gR2 the analytic/aggregate functions were further enhanced; this post runs some experiments with the new LISTAGG function and summarizes the results.

Syntax:
LISTAGG(measure_expr [, 'delimiter']) WITHIN GROUP (order_by_clause) [OVER query_partition_clause]

Although LISTAGG is an aggregate function, it can also act analytically (via the optional OVER() clause). A LISTAGG call requires:
• the column or expression to aggregate
• the WITHIN GROUP keywords
• an ORDER BY clause inside the group (NULL may be used when the ordering does not matter)

As an aggregate function

The test data below comes from the HR demo schema on 11.2.0.4; the goal is to aggregate FIRST_NAME into a comma-separated list.

SQL> SELECT DEPARTMENT_ID,FIRST_NAME FROM HR.EMPLOYEES WHERE DEPARTMENT_ID <= 40;
DEPARTMENT_ID FIRST_NAME
------------- ----------------------------------------
           10 Jennifer
           20 Michael
           20 Pat
           30 Den
           30 Alexander
           30 Shelli
           30 Sigal
           30 Guy
           30 Karen
           40 Susan

10 rows selected.
SQL> SELECT DEPARTMENT_ID,LISTAGG(FIRST_NAME,',') WITHIN GROUP (ORDER BY FIRST_NAME) AS LIST_FIRSTNAME
  2  FROM EMPLOYEES WHERE DEPARTMENT_ID <= 40 GROUP BY DEPARTMENT_ID;

DEPARTMENT_ID LIST_FIRSTNAME
------------- --------------------------------------------------
           10 Jennifer
           20 Michael,Pat
           30 Alexander,Den,Guy,Karen,Shelli,Sigal
           40 Susan
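The aggregate behavior can be sketched in plain Python (a toy emulation built on itertools.groupby, not Oracle's implementation; the employees list mirrors the HR rows above):

```python
from itertools import groupby

def listagg(rows, key, value, delimiter=','):
    """Emulate LISTAGG(value, delimiter) WITHIN GROUP (ORDER BY value)
    ... GROUP BY key: for each key, join the sorted values."""
    rows = sorted(rows, key=lambda r: (r[key], r[value]))
    return {k: delimiter.join(r[value] for r in grp)
            for k, grp in groupby(rows, key=lambda r: r[key])}

employees = [
    {'dept': 10, 'name': 'Jennifer'},
    {'dept': 20, 'name': 'Michael'}, {'dept': 20, 'name': 'Pat'},
    {'dept': 30, 'name': 'Den'}, {'dept': 30, 'name': 'Alexander'},
    {'dept': 30, 'name': 'Shelli'}, {'dept': 30, 'name': 'Sigal'},
    {'dept': 30, 'name': 'Guy'}, {'dept': 30, 'name': 'Karen'},
    {'dept': 40, 'name': 'Susan'},
]
print(listagg(employees, 'dept', 'name'))
# dept 30 -> 'Alexander,Den,Guy,Karen,Shelli,Sigal', matching the SQL result
```

Sorting by (key, value) before grouping reproduces the WITHIN GROUP (ORDER BY FIRST_NAME) ordering seen in the query output.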

As an analytic function

SQL> Select DEPARTMENT_ID,
  2         employee_id,
  3         Listagg(FIRST_NAME, ',') Within Group(Order By employee_id) Over(Partition By DEPARTMENT_ID) As list_firstname
  4    From Employees
  5   Where Department_Id <= 40; 
DEPARTMENT_ID EMPLOYEE_ID LIST_FIRSTNAME
------------- ----------- ------------------------------------
           10         200 Jennifer
           20         201 Michael,Pat
           20         202 Michael,Pat
           30         114 Den,Alexander,Shelli,Sigal,Guy,Karen
           30         115 Den,Alexander,Shelli,Sigal,Guy,Karen
           30         116 Den,Alexander,Shelli,Sigal,Guy,Karen
           30         117 Den,Alexander,Shelli,Sigal,Guy,Karen
           30         118 Den,Alexander,Shelli,Sigal,Guy,Karen
           30         119 Den,Alexander,Shelli,Sigal,Guy,Karen
           40         203 Susan

10 rows selected.
SQL> SELECT *
  2  FROM   TABLE(DBMS_XPLAN.DISPLAY(format => 'BASIC +ROWS +PROJECTION'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------
Plan hash value: 2095165354

------------------------------------------------------------------
| Id  | Operation                    | Name              | Rows  |
------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                   |    10 |
|   1 |  WINDOW SORT                 |                   |    10 |
|   2 |   TABLE ACCESS BY INDEX ROWID| EMPLOYEES         |    10 |
|   3 |    INDEX RANGE SCAN          | EMP_DEPARTMENT_IX |    10 |
------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=2) "DEPARTMENT_ID"[NUMBER,22], "EMPLOYEE_ID"[NUMBER,22],
       "EMPLOYEES".ROWID[ROWID,10], "FIRST_NAME"[VARCHAR2,20],
       LISTAGG("FIRST_NAME",',') WITHIN GROUP ( ORDER BY "EMPLOYEE_ID") OVER (
       PARTITION BY "DEPARTMENT_ID")[4000]
   2 - "EMPLOYEES".ROWID[ROWID,10], "EMPLOYEE_ID"[NUMBER,22],
       "FIRST_NAME"[VARCHAR2,20], "DEPARTMENT_ID"[NUMBER,22]
   3 - "EMPLOYEES".ROWID[ROWID,10], "DEPARTMENT_ID"[NUMBER,22]

v$dead_cleanup on 12.2.0.1

Oracle introduced V$DEAD_CLEANUP (in 11.2.0.4 and later releases) to monitor dead processes and killed sessions. The official documentation describes it as:
"V$DEAD_CLEANUP shows the dead processes and killed sessions present in the instance and their cleanup status."
This post runs the test twice on Oracle 12.2.0.1: one run left the view in the RESOURCES FREED state, the other in CLEANUP PENDING.
Log in to the database as the json user; the session information is:

SQL> col username for a10
SQL> select sid,serial#,paddr,saddr,username,status from v$session where username = 'JSON';

       SID    SERIAL# PADDR            SADDR            USERNAME   STATUS
---------- ---------- ---------------- ---------------- ---------- ----------------
       138      63930 000000007DCC8E48 000000007C147988 JSON       INACTIVE

SQL> set linesize 200
SQL> col tracefile for a70
SQL> select addr,pid,spid,stid,execution_type,tracefile from v$process where addr = '000000007DCC8E48';

ADDR                    PID SPID       STID     EXECUTION_ TRACEFILE
---------------- ---------- ---------- -------- ---------- ----------------------------------------------------------------------
000000007DCC8E48         65 19675      19675    PROCESS    /orasoft/oracle/diag/rdbms/ywdb/ywdb/trace/ywdb_ora_19675.trc

Kill the session:

SQL> alter system kill session '138,63930';

System altered.

SQL> select sid,serial#,paddr,saddr,username,status from v$session where username = 'JSON';

       SID    SERIAL# PADDR            SADDR            USERNAME   STATUS
---------- ---------- ---------------- ---------------- ---------- ----------------
       138      63930 000000007DCC8E48 000000007C147988 JSON       KILLED

SQL> col type for a20
SQL> col cleanup_owner for a30
SQL> col state for a30

SQL> select type,paddr,saddr,root_addr,cleanup_owner,state,dead_time,con_id from V$DEAD_CLEANUP t;

TYPE                      PADDR            SADDR            ROOT_ADDR        CLEANUP_OWNER             STATE                      DEAD_TIME     CON_ID
------------------------- ---------------- ---------------- ---------------- ------------------------- ------------------------- ---------- ----------
KILLED SESSION            000000007DCC8E48 000000007C147988 000000007C147988 OWNER PROCESS             RESOURCES FREED                  102          3

SQL> select addr,pid,spid,stid,execution_type,tracefile from v$process where addr = '000000007DCC8E48';

no rows selected

The process entry has been removed from v$process.

Run the test again:

SQL> alter system kill session '134,10324';

System altered.

SQL> col name for a5
SQL> col name for a8
SQL> select sid,serial#,username,status,saddr,paddr from v$session where username = 'JSON';

       SID    SERIAL# USERNAME STATUS           SADDR            PADDR
---------- ---------- -------- ---------------- ---------------- ----------------
       134      10324 JSON     KILLED           000000007C151068 000000007DCA3A08
SQL> col type for a25
SQL> col CLEANUP_OWNER for a25
SQL> col state for a25
SQL> select type,paddr,saddr,root_addr,cleanup_owner,state,dead_time,con_id from V$DEAD_CLEANUP t;

TYPE                      PADDR            SADDR            ROOT_ADDR        CLEANUP_OWNER             STATE                      DEAD_TIME     CON_ID
------------------------- ---------------- ---------------- ---------------- ------------------------- ------------------------- ---------- ----------
KILLED SESSION            000000007DCA3A08 000000007C151068 000000007C151068 OWNER PROCESS             CLEANUP PENDING                  274          3

SQL> select * from V$CLEANUP_PROCESS;

NAME     PADDR            SADDR            STATE                     DEAD_IN_CLEANUP  CLEANUP_TIME TIME_SINCE_LAST_CLEANUP NUM_CLEANED     CON_ID
-------- ---------------- ---------------- ------------------------- ---------------- ------------ ----------------------- ----------- ----------
PMON     000000007DC87AD8 000000007C28A900 IDLE                      00                          0                 1889383           0          0
CLMN     000000007DC88B68 000000007C3B13D8 IDLE                      00                          0                 1889383           0          0

SQL> select type,paddr,saddr,root_addr,cleanup_owner,state,dead_time,con_id from V$DEAD_CLEANUP t;

TYPE                      PADDR            SADDR            ROOT_ADDR        CLEANUP_OWNER             STATE                      DEAD_TIME     CON_ID
------------------------- ---------------- ---------------- ---------------- ------------------------- ------------------------- ---------- ----------
KILLED SESSION            000000007DCA3A08 000000007C151068 000000007C151068 OWNER PROCESS             CLEANUP PENDING                  587          3

Tracing the pmon and clmn processes with event='10246 trace name context forever, level 10' turned up nothing interesting.

Footnote:
When a session is killed, the event is recorded in the alert log:

BLUE(3):Opening pdb with no Resource Manager plan active
Pluggable database BLUE opened read write
Completed: alter pluggable database blue open
2017-03-14T13:00:22.029236+08:00
BLUE(3):KILL SESSION for sid=(266, 1517):
BLUE(3):  Reason = alter system kill session
BLUE(3):  Mode = KILL SOFT -/-/-
BLUE(3):  Requestor = USER (orapid = 63, ospid = 14620, inst = 1)
BLUE(3):  Owner = Process: USER (orapid = 62, ospid = 14611)
BLUE(3):  Result = ORA-0
2017-03-14T13:12:41.833469+08:00