The version covered here is a bit old, but the material is still very useful.
This is an updated edition of <<Understanding and Using the Oracle 8i Analysis Tool LogMiner>>, summarizing LogMiner usage and the handling of various problems, building on earlier work and adding issues I ran into myself along with their solutions:
Understanding and Using the Oracle 8i Analysis Tool LogMiner
Oracle LogMiner is a genuinely useful analysis tool that Oracle has shipped since release 8i. With it you can easily extract the concrete contents of Oracle redo log files (including archived log files); in particular, it can reconstruct all DML statements (insert, update, delete, and so on) executed against the database, and it can also produce the corresponding rollback SQL. The tool is especially well suited to debugging, auditing, or backing out a specific transaction.
LogMiner consists of a set of PL/SQL packages and some dynamic views (part of the Oracle 8i built-in packages). It ships as part of the Oracle database and is completely free with the 8i product. Compared with other Oracle built-in tools, however, it is somewhat awkward to use, mainly because it provides no graphical user interface (GUI). This article describes in detail how to install and use it.
I. Uses of LogMiner
The redo log files hold all the data needed for database recovery; they record every change to the database, that is, all DML statements executed against it.
Before Oracle 8i, Oracle provided no tool to help database administrators read and interpret the contents of the redo log files. When something went wrong, about all an ordinary DBA could do was package up the log files, send them to Oracle Support, and wait quietly for an answer. From 8i onward, however, Oracle supplies a powerful tool for exactly this: LogMiner.
LogMiner can analyze both online and archived log files, and it can analyze the redo logs of the local database as well as those of another database.
In summary, the main uses of LogMiner are:
1. Tracking database changes: changes can be tracked offline without affecting the performance of the online system.
2. Backing out database changes: specific changes can be rolled back, reducing the need for point-in-time recovery.
3. Tuning and capacity planning: growth patterns can be inferred by analyzing the data in the log files.
II. Installing LogMiner
To install LogMiner, first run the following two scripts:
1. $ORACLE_HOME/rdbms/admin/dbmslm.sql
2. $ORACLE_HOME/rdbms/admin/dbmslmd.sql
Both scripts must be run as the SYS user. The first creates the DBMS_LOGMNR package, which performs the log analysis; the second creates the DBMS_LOGMNR_D package, which builds the data dictionary file.
An installation example on the Windows platform:
SQL> @c:\oracle\ora81\rdbms\admin\dbmslm.sql
Package created.
Grant succeeded.
SQL> @c:\oracle\ora81\rdbms\admin\dbmslmd.sql
Package created.
Package body created.
No errors.
Grant succeeded.
III. Using LogMiner
The following describes in detail how to use the LogMiner tool.
1. Creating the data dictionary file
As mentioned above, LogMiner consists of two PL/SQL built-in packages (DBMS_LOGMNR and DBMS_LOGMNR_D) and four V$ dynamic performance views (created when LogMiner is started with DBMS_LOGMNR.START_LOGMNR):
1. v$logmnr_contents: the results of the log analysis.
2. v$logmnr_dictionary: since LogMiner can work with more than one dictionary file, this view shows information about them.
3. v$logmnr_parameters: the parameters LogMiner was started with.
4. v$logmnr_logs: the list of log files registered for analysis.
Before analyzing redo log files with LogMiner, you can use the DBMS_LOGMNR_D package to export the data dictionary to a text file. The dictionary file is optional, but without it the parts of the reconstructed statements that come from the data dictionary (table names, column names, and so on), as well as the values, are emitted in hexadecimal form and cannot be read directly. For example, the SQL statement:
INSERT INTO dm_dj_swry (rydm, rymc) VALUES (00005, 'John Doe');
would be reconstructed by LogMiner as something like:
insert into Object#308(col#1, col#2) values (hextoraw('c30rte567e436'), hextoraw('4a6f686e20446f65'));
The purpose of the dictionary file is to let LogMiner emit real names for references into the internal data dictionary, instead of the internal hexadecimal form. The dictionary file is a plain text file, created with the DBMS_LOGMNR_D package. If the tables in the database being analyzed change, so that its data dictionary changes, the dictionary file must be regenerated. Likewise, when analyzing the redo logs of a different database, the dictionary file must be regenerated from that database.
The dictionary file stores the mapping between tables and object ID numbers; when it is used, LogMiner builds a one-to-one correspondence between table names and object IDs. Note, therefore, that if you create a new table and may later want to analyze its log activity, you must regenerate the dictionary file so that the new object's name and ID are recorded in it.
First, specify the location of the dictionary file in the init.ora initialization parameter file by adding the UTL_FILE_DIR parameter, whose value is the server directory where the dictionary file will be placed. For example:
UTL_FILE_DIR = (e:\Oracle\logs)
Restart the database so the new parameter takes effect, then create the dictionary file:
SQL> CONNECT SYS
SQL> EXECUTE dbms_logmnr_d.build(
dictionary_filename => 'v816dict.ora',
dictionary_location => 'e:\oracle\logs');
Note: the parameter dictionary_filename specifies the name of the dictionary file; dictionary_location specifies the directory in which it is placed, and this directory must match the value of the UTL_FILE_DIR initialization parameter.
Also note that the dictionary-building step may fail with a subscript-out-of-limit error:
SQL> EXECUTE dbms_logmnr_d.build(dictionary_filename => 'v817dict.ora',dictionary_location => 'd:\oradict');
BEGIN dbms_logmnr_d.build(dictionary_filename => 'v817dict.ora',dictionary_location => 'd:\oradict')
*
ERROR at line 1:
ORA-06532: Subscript outside of limit
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 793
ORA-06512: at line 1
Workaround: in the dbmslmd.sql script, change
TYPE col_desc_array IS VARRAY(513) OF col_description;
to:
TYPE col_desc_array IS VARRAY(713) OF col_description;
Save the file, then run the script again:
SQL> @c:\oracle\ora81\rdbms\admin\dbmslmd.sql
Package created.
Package body created.
No errors.
Grant succeeded.
Then recompile the DBMS_LOGMNR_D package body:
SQL> alter package DBMS_LOGMNR_D compile body;
Package body altered.
Finally, run dbms_logmnr_d.build again:
SQL> EXECUTE dbms_logmnr_d.build(dictionary_filename => 'v817dict.ora',dictionary_location => 'd:\oradict');
PL/SQL procedure successfully completed.
The dictionary file is now generated normally, and it is the same size as the one produced in the case where the subscript-out-of-limit error was raised.
Note also that some documents claim:
If the specified dictionary file (e.g. dict.ora) already exists, it must be deleted completely (including from the recycle bin) before running the build, or the procedure will fail.
In practice this claim turns out to be unfounded: the build completed normally even though the previously generated dictionary file had not been deleted.
2. Creating the list of log files to analyze
Oracle redo logs come in two forms, online log files and archived (offline) log files; building the list differs slightly between the two.
(1) Analyzing online redo log files
A. Create the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'e:\Oracle\oradata\sxf\redo01.log',
Options=>dbms_logmnr.new);
B. Add further log files to the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'e:\Oracle\oradata\sxf\redo02.log',
Options=>dbms_logmnr.addfile);
(2) Analyzing archived (offline) log files
A. Create the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'E:\Oracle\oradata\sxf\archive\ARCARC09108.001',
Options=>dbms_logmnr.new);
B. Add further log files to the list
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'E:\Oracle\oradata\sxf\archive\ARCARC09109.001',
Options=>dbms_logmnr.addfile);
How many log files you register in this list is entirely up to you, but it is best to add only one file at a time and, once it has been analyzed, add the next one.
Correspondingly, a log file can be removed from the list by calling dbms_logmnr.add_logfile with the dbms_logmnr.REMOVEFILE option. The following example removes the file e:\Oracle\oradata\sxf\redo02.log added above.
SQL> EXECUTE dbms_logmnr.add_logfile(
LogFileName=>'e:\Oracle\oradata\sxf\redo02.log',
Options=>dbms_logmnr.REMOVEFILE);
The dynamic performance view v$logmnr_logs shows which log files are currently registered for analysis.
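For example, a quick check of the registered files might look like this (run in the same session; the columns shown are from the v$logmnr_logs view):

```sql
-- Show which log files are registered for this LogMiner session
SELECT log_id, filename
  FROM v$logmnr_logs;
```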
With the log file list in place, the analysis itself can begin.
3. Running the analysis with LogMiner
(1) Without restrictions
SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName=>'e:\oracle\logs\v816dict.ora');
(2) With restrictions
By setting the various parameters of DBMS_LOGMNR.START_LOGMNR (their meanings are given in Table 1), you can narrow the range of the log files to analyze. Using the start- and end-time parameters we can restrict the analysis to a given time window. The following example analyzes only the logs of September 18, 2001:
SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName => 'e:\oracle\logs\v816dict.ora',
StartTime => to_date('2001-9-18 00:00:00','YYYY-MM-DD HH24:MI:SS'),
EndTime => to_date('2001-9-18 23:59:59','YYYY-MM-DD HH24:MI:SS'));
Note: whether this call succeeds depends on the given starttime and endtime falling within a valid range. In particular, the end time must be less than or equal to the creation time of the archived log; if it is later, the analysis cannot run. When analyzing multiple archived logs, they should preferably be contiguous.
The range of the analysis can also be limited by starting and ending SCN:
SQL> EXECUTE dbms_logmnr.start_logmnr(
DictFileName => 'e:\oracle\logs\v816dict.ora',
StartScn => 20,
EndScn => 50);
Table 1: Parameters of the DBMS_LOGMNR.START_LOGMNR procedure
Parameter     Type            Default     Meaning
StartScn      Number          0           Analyze the portion of the redo log with SCN >= StartScn
EndScn        Number          0           Analyze the portion of the redo log with SCN <= EndScn
StartTime     Date            1998-01-01  Analyze the portion of the redo log with timestamp >= StartTime
EndTime       Date            2988-01-01  Analyze the portion of the redo log with timestamp <= EndTime
DictFileName  VARCHAR2        ''          Dictionary file containing a snapshot of the database catalog; with it the analysis output is readable text rather than the internal hexadecimal form
Options       BINARY_INTEGER  0           Debug flag; rarely used in practice
If the analysis fails with an "invalid month" error, try the following steps:
alter session set nls_date_language='AMERICAN';
alter session set nls_date_format='DD-MON-YYYY HH:MI:SS';
then run the package: exec dbms_logmnr.start_logmnr(dictfilename=>'');
Be sure to pass the dictfilename parameter by name: the procedure has five defaulted parameters, and an unnamed value is bound to the first one.
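Putting the NLS settings and the named parameter together, a session might run the following (the dictionary path here is illustrative):

```sql
-- Work around the 'invalid month' error, then start LogMiner
alter session set nls_date_language='AMERICAN';
alter session set nls_date_format='DD-MON-YYYY HH:MI:SS';
-- Pass dictfilename by name; an unnamed value would be bound to StartScn instead
exec dbms_logmnr.start_logmnr(dictfilename => 'e:\oracle\logs\v816dict.ora');
```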
4. Viewing the results (v$logmnr_contents)
At this point the contents of the redo log files have been analyzed. The dynamic performance view v$logmnr_contents contains all the information LogMiner has extracted.
SELECT sql_redo FROM v$logmnr_contents;
If we only want to know what a particular user did to a particular table, a query like the following will do; this one returns everything the user DB_ZGXT did to the table SB_DJJL:
SQL> SELECT sql_redo FROM v$logmnr_contents WHERE username='DB_ZGXT' AND seg_name='SB_DJJL';
It must be stressed that the analysis results in v$logmnr_contents exist only for the lifetime of the session that ran 'dbms_logmnr.start_logmnr'. All LogMiner storage lives in PGA memory, so no other process can see it, and when the session ends the results disappear with it.
Finally, calling the procedure DBMS_LOGMNR.END_LOGMNR ends the log analysis; the PGA memory area is then cleared and the analysis results cease to exist.
5. Ending the analysis: run EXECUTE DBMS_LOGMNR.END_LOGMNR.
IV. Other Notes
We can use LogMiner to analyze redo log files produced by other database instances, not just the redo logs of the database in which LogMiner itself is installed. When analyzing another instance, several points need attention:
1. LogMiner must use the dictionary file produced by the database being analyzed, not one produced by the database running LogMiner, and the character set of the LogMiner database must match that of the analyzed database.
2. The platform of the analyzed database must match that of the database running LogMiner. That is, if the files to analyze were produced by Oracle 8i on a UNIX platform, LogMiner must also run on an Oracle instance on UNIX, not on, say, Microsoft NT. The hardware does not have to be identical, however.
3. LogMiner can only analyze logs from Oracle 8 and later products (it can analyze Oracle 8 logs); for earlier releases it is of no help. Also, Oracle 8i can analyze only DML operations; from Oracle 9i onward both DML and DDL operations can be analyzed. In Oracle 9i the following query shows DDL operations and when they occurred:
SQL>select sql_redo
2 from v$logmnr_contents
3 where sql_redo like '%create%' or sql_redo like '%CREATE%';
LogMiner does not support index-organized tables, LONG and LOB columns, or collection types.
It also cannot be used in an MTS environment.
V. Conclusion
For a database administrator (DBA), LogMiner is a very powerful tool and one that is often needed in day-to-day work; with it a great deal of information about database activity can be obtained. One of its most important uses is recovering a specific database change without restoring the whole database. It can also be used to monitor or audit user activity; for example, you can use it to see who modified which data and what that data looked like before the change. It can analyze redo log files produced by any Oracle 8 or later release, and, just as notably, the log files of other databases. In short, it is a highly effective tool for the DBA, and a deep understanding and solid command of it is a real help in every DBA's practical work.
Reference document:
The Oracle8i LogMiner Utility
PURPOSE
This paper details the mechanics of what LogMiner does, as well as detailing
the commands and environment it uses.
SCOPE & APPLICATION
For DBAs requiring further information about LogMiner.
The ability to provide a readable interface to the redo logs has been asked
for by customers for a long time. The ALTER SYSTEM DUMP LOGFILE interface
has been around for a long time, though its usefulness outside Support is
limited. There have been a number of third party products, e.g. BMC's PATROL
DB-Logmaster (SQL*Trax as was), which provide some functionality in this
area. With Oracle release 8.1 there is a facility in the Oracle kernel to do
the same. LogMiner allows the DBA to audit changes to data and performs
analysis on the redo to determine trends, aid in capacity planning,
Point-in-time Recovery etc.
RELATED DOCUMENTS
[NOTE:117580.1] ORA-356, ORA-353, & ORA-334 Errors When Mining Logs with
Different DB_BLOCK_SIZE
Oracle8i - 8.1 LogMiner:
=========================
1. WHAT DOES LOGMINER DO?
=========================
LogMiner can be used against online or archived logs from either the
'current' database or a 'foreign' database. The reason for this is that it
uses an external dictionary file to access meta-data, rather than the
'current' data dictionary.
It is important that this dictionary file is kept in step with the database
which is being analyzed. If the dictionary used is out of step from the redo
then analysis will be considerably more difficult. Building the external
dictionary will be discussed in detail in section 3.
LogMiner scans the log/logs it is interested in, and generates, using the
dictionary file meta-data, a set of SQL statements which would have the same
effect on the database as applying the corresponding redo record.
LogMiner prints out the 'Final' SQL that would have gone against the
database. For example:
Insert into Table x Values ( 5 );
Update Table x set COLUMN=newvalue WHERE ROWID='<>'
Delete from Table x WHERE ROWID='<>' AND COLUMN=value AND COLUMN=VALUE
We do not actually see the SQL that was issued, rather an executable SQL
statement that would have the same EFFECT. Since it is also stored in the
same redo record, we also generate the undo column which would be necessary
to roll this change out.
For SQL which rolls back, no undo SQL is generated, and the rollback flag is
set. An insert followed by a rollback therefore looks like:
REDO          UNDO          ROLLBACK
insert sql    delete sql    0
delete sql                  1
Because it operates against the physical redo records, multirow operations
are not recorded in the same manner e.g. DELETE FROM EMP WHERE DEPTNO=30
might delete 100 rows in the SALES department in a single statement, the
corresponding LogMiner output would show one row of output per row in the
database.
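As a sketch of that difference (the row values and ROWIDs here are invented), a single multirow statement yields one reconstructed single-row statement per affected row:

```sql
-- Statement the application actually issued:
DELETE FROM EMP WHERE DEPTNO=30;

-- What sql_redo would contain: one single-row delete per affected row, e.g.
-- delete from EMP where ROWID='AAAAA1' and EMPNO=7499 and DEPTNO=30;
-- delete from EMP where ROWID='AAAAA2' and EMPNO=7521 and DEPTNO=30;
```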
2. WHAT IT DOES NOT DO
======================
1. 'Trace' Application SQL - use SQL_Trace/10046
Since LogMiner only generates low-level SQL, not what was issued, you
cannot use LogMiner to see exactly what was being done based on the SQL.
What you can see, is what user changed what data at what time.
2. 'Replicate' an application
LogMiner does not cover everything. In addition, DDL is not supported
(the underlying inserts into tab$ etc. are captured, but the CREATE TABLE statement itself is not).
3. Access data dictionary SQL In a visible form
Especially UPDATE USER$ SET PASSWORD=
Other Known Current Limitations
===============================
LogMiner cannot cope with Objects.
LogMiner cannot cope with Chained/Migrated Rows.
LogMiner produces fairly unreadable output if there is no record of the
table in the dictionary file. See below for output.
The database where the analysis is being performed must have a block size
of at least equal to that of the originating database. See [NOTE:117580.1].
3. FUNCTIONALITY
================
The LogMiner feature is made up of three procedures in the LogMiner
(dbms_logmnr) package, and one in the Dictionary (dbms_logmnr_d).
These are built by the following scripts: (Run by catproc)
$ORACLE_HOME/rdbms/admin/dbmslogmnrd.sql
$ORACLE_HOME/rdbms/admin/dbmslogmnr.sql
$ORACLE_HOME/rdbms/admin/prvtlogmnr.plb
since 8.1.6:
$ORACLE_HOME/rdbms/admin/dbmslmd.sql
$ORACLE_HOME/rdbms/admin/dbmslm.sql
$ORACLE_HOME/rdbms/admin/prvtlm.plb
1. dbms_logmnr_d.build
This procedure builds the dictionary file used by the main LogMiner
package to resolve object names, and column datatypes. It should be
generated relatively frequently, since otherwise newer objects will not
be recorded.
It is possible to generate a Dictionary file from an 8.0 database and
use it to Analyze Oracle 8.0 redo logs. In order to do this run
"dbmslogmnrd.sql" against the 8.0 database, then follow the procedure as
below. All analysis of the logfiles will have to take place while
connected to an 8.1 database since dbms_logmnr cannot operate against
Oracle 8.0 because it uses trusted callouts.
Any redo relating to tables which are not included in the dictionary
file are dumped RAW. Example: If LogMiner cannot resolve the Table and
column references, then the following is output: (insert statement)
insert into UNKNOWN.objn:XXXX(Col[x],....) VALUES
( HEXTORAW('xxxxxx'), HEXTORAW('xxxxx')......)
PARAMETERS
==========
1. The name of the dictionary file you want to produce.
2. The name of the directory where you want the file produced.
The Directory must be writeable by the server i.e. included in
UTL_FILE_DIR path.
EXAMPLE
=======
BEGIN
dbms_logmnr_d.build(
dictionary_filename=> 'miner_dictionary.dic',
dictionary_location => '/export/home/sme81/aholland/testcases
/logminer'
);
END;
/
The dbms_logmnr package actually performs the redo analysis.
2. dbms_logmnr.add_logfile
This procedure registers the logfiles to be analyzed in this session. It
must be called once for each logfile. This populates the fixed table
X$logmnr_logs (v$logmnr_logs) with a row corresponding to the logfile.
Parameters
===========
1. The logfile to be analyzed.
2. Option
DBMS_LOGMNR.NEW (SESSION) First file to be put into PGA memory.
This initialises the V$logmnr_logs table.
and
DBMS_LOGMNR.ADDFILE
adds another logfile to the v$logmnr_logs PGA memory.
Has the same effect as NEW if there are no rows there
presently.
DBMS_LOGMNR.REMOVEFILE
removes a row from v$logmnr_logs.
Example
=======
Include all my online logs for analysis.........
BEGIN
dbms_logmnr.add_logfile(
'/export/home/sme81/aholland/database/files/redo03.log',
DBMS_LOGMNR.NEW );
dbms_logmnr.add_logfile(
'/export/home/sme81/aholland/database/files/redo02.log',
DBMS_LOGMNR.ADDFILE );
dbms_logmnr.add_logfile(
'/export/home/sme81/aholland/database/files/redo01.log',
DBMS_LOGMNR.ADDFILE );
END;
/
Full Path should be required, though an environment variable
is accepted. This is NOT expanded in V$LOGMNR_LOGS.
3. dbms_logmnr.start_logmnr;
This package populates V$logmnr_dictionary, v$logmnr_parameters,
and v$logmnr_contents.
Parameters
==========
1. StartScn Default 0
2. EndScn Default 0,
3. StartTime Default '01-jan-1988'
4. EndTime Default '01-jan-2988'
5. DictFileName Default '',
6. Options Default 0 Debug flag - uninvestigated as yet
A point to note here is that there are comparisons made between the
SCNs, the times entered, and the range of values in the file. If the SCN
range OR the start/end range are not wholly contained in this log, then
the start_logmnr command will fail with the general error:
ORA-01280 Fatal LogMiner Error.
4. dbms_logmnr.end_logmnr;
This is called with no parameters.
/* THIS IS VERY IMPORTANT FOR SUPPORT */
This procedure MUST be called prior to exiting the session that was
performing the analysis. This is because of the way the PGA is used to
store the dictionary definitions from the dictionary file, and the
V$LOGMNR_CONTENTS output.
If you do not call end_logmnr, you will silently get ORA-00600 [723] ...
on logoff. This OERI is triggered because the PGA is bigger at logoff
than it was at logon, which is considered a space leak. The main problem
from a support perspective is that it is silent, i.e. not signalled back
to the user screen, because by then they have logged off.
The way to spot LogMiner leaks is that the trace file produced by the
OERI 723 will have a PGA heap dumped with many chunks of type 'Freeable'
with a description of "KRVD:alh".
4. OUTPUT
=========
Effectively, the output from LogMiner is the contents of V$logmnr_contents.
The output is only visible during the life of the session which runs
start_logmnr. This is because all the LogMiner memory is PGA memory, so it
is neither visible to other sessions, nor is it persistent. As the session
logs off, either dbms_logmnr.end_logmnr is run to clear out the PGA, or an
OERI 723 is signalled as described above.
Typically users are going to want to output sql_redo based on queries by
timestamp, seg_name or row_id.
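A typical query of that shape might look like this (the table name and time window are illustrative):

```sql
-- Redo generated against one table inside a time window
SELECT timestamp, username, sql_redo
  FROM v$logmnr_contents
 WHERE seg_name = 'EMP'
   AND timestamp BETWEEN to_date('18-SEP-2001 00:00:00','DD-MON-YYYY HH24:MI:SS')
                     AND to_date('18-SEP-2001 23:59:59','DD-MON-YYYY HH24:MI:SS');
```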
v$logmnr_contents
Name Null? Type
------------------------------- -------- ----
SCN NUMBER
TIMESTAMP DATE
THREAD# NUMBER
LOG_ID NUMBER
XIDUSN NUMBER
XIDSLT NUMBER
XIDSQN NUMBER
RBASQN NUMBER
RBABLK NUMBER
RBABYTE NUMBER
UBAFIL NUMBER
UBABLK NUMBER
UBAREC NUMBER
UBASQN NUMBER
ABS_FILE# NUMBER
REL_FILE# NUMBER
DATA_BLK# NUMBER
DATA_OBJ# NUMBER
DATA_OBJD# NUMBER
SEG_OWNER VARCHAR2(32)
SEG_NAME VARCHAR2(32)
SEG_TYPE VARCHAR2(32)
TABLE_SPACE VARCHAR2(32)
ROW_ID VARCHAR2(19)
SESSION# NUMBER
SERIAL# NUMBER
USERNAME VARCHAR2(32)
ROLLBACK NUMBER
OPERATION VARCHAR2(32)
SQL_REDO VARCHAR2(4000)
SQL_UNDO VARCHAR2(4000)
RS_ID VARCHAR2(32)
SSN NUMBER
CSF NUMBER
INFO VARCHAR2(32)
STATUS NUMBER
PH1_NAME VARCHAR2(32)
PH1_REDO VARCHAR2(4000)
PH1_UNDO VARCHAR2(4000)
PH2_NAME VARCHAR2(32)
PH2_REDO VARCHAR2(4000)
PH2_UNDO VARCHAR2(4000)
PH3_NAME VARCHAR2(32)
PH3_REDO VARCHAR2(4000)
PH3_UNDO VARCHAR2(4000)
PH4_NAME VARCHAR2(32)
PH4_REDO VARCHAR2(4000)
PH4_UNDO VARCHAR2(4000)
PH5_NAME VARCHAR2(32)
PH5_REDO VARCHAR2(4000)
PH5_UNDO VARCHAR2(4000)
SQL> set heading off
SQL> select scn, username, sql_undo from v$logmnr_contents
where seg_name = 'EMP';
12134756 scott insert (...) into emp;
12156488 scott delete from emp where empno = ...
12849455 scott update emp set mgr =
This will return the results of an SQL statement without the column
headings. The columns that you are really going to want to query are the
"sql_undo" and "sql_redo" values because they give the transaction details
and syntax.
5. PLACEHOLDERS
===============
In order to allow users to be able to query directly on specific data
values, there are up to five PLACEHOLDERs included at the end of
v$logmnr_contents. When enabled, a user can query on the specific BEFORE and
AFTER values of a specific field, rather than a %LIKE% query against the
SQL_UNDO/REDO fields. This is implemented via an external file called
"logmnr.opt". (See the Supplied Packages manual entry on dbms_logmnr for
further details.) The file must exist in the same directory as the
dictionary file used, and contains the prototype mappings of the PHx fields
to the fields in the table being analyzed.
Example entry
=============
colmap = SCOTT EMP ( EMPNO, 1, ENAME, 2, SAL, 3 );
In the above example, when a redo record is encountered for the SCOTT.EMP
table, the full Statement redo and undo information populates the SQL_REDO
and SQL_UNDO columns respectively; however, the PH3_NAME, PH3_REDO and
PH3_UNDO columns will also be populated ('SAL', the new value, and the old
value respectively), which means that the analyst can query in the form:
SELECT * FROM V$LOGMNR_CONTENTS
WHERE SEG_NAME ='EMP'
AND PH3_NAME='SAL'
AND PH3_REDO=1000000;
The returned PH3_UNDO column would return the value prior to the update.
This enables much more efficient queries to be run against V$LOGMNR_CONTENTS
view, and if, for instance, a CTAS was issued to store a physical copy, the
column can be indexed.
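A minimal sketch of that approach, run while the LogMiner session is still active (the table and index names are invented):

```sql
-- Persist the results before the session ends and the PGA copy disappears
CREATE TABLE logmnr_saved AS
  SELECT * FROM v$logmnr_contents;

-- The placeholder columns can then be indexed for efficient value queries
CREATE INDEX logmnr_saved_ph3_ix ON logmnr_saved (ph3_name, ph3_redo);
```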