Thursday, August 9, 2012

Reading alert.log in PL/SQL

In this short article I'll show a simple block that reads errors from alert.log without having access to the server itself. Be careful: if the log file is big enough, the script will run for quite a while. In my case a 300 MB file was parsed in approximately 40 seconds, plus 10 seconds to output the result. You may abort it at any time, though.
Let's find alert.log and create a work directory. You also need privileges to work with UTL_FILE and to read from that directory (see the grants below):


SQL> select physical_filename,physical_path from v$diag_dir_ext where physical_filename like 'alert%.log';

SQL> create or replace directory TRACE as 'path_where_alert_log_is';
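
If the block will be run by a non-DBA user, it also needs grants along these lines (SCOTT is just a placeholder account):

SQL> grant execute on UTL_FILE to SCOTT;
SQL> grant read on directory TRACE to SCOTT;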



SQL> DECLARE
  v_dir       varchar2(50) := 'TRACE';
  v_name      varchar2(30) := 'alert_SID.log';
  v_inputfile UTL_FILE.FILE_TYPE;
  v_newline   varchar2(500);
  v_newline_1 varchar2(500);
BEGIN
  dbms_output.enable(10000000);
  -- open the alert log read-only through the TRACE directory
  v_inputfile := UTL_FILE.FOPEN(v_dir, v_name, 'r');
  LOOP
    utl_file.get_line(v_inputfile, v_newline);
    IF substr(v_newline, 1, 4) = 'ORA-'
       and substr(v_newline, 1, 9) not in ('ORA-00942', 'ORA-06550', 'ORA-12012') /* you may specify some expected errors here */
    THEN
      -- print the error line and the line that follows it
      dbms_output.put_line(v_newline);
      utl_file.get_line(v_inputfile, v_newline_1);
      dbms_output.put_line(v_newline_1);
    END IF;
  END LOOP;
EXCEPTION
  WHEN NO_DATA_FOUND THEN  -- raised by GET_LINE at end of file
    dbms_output.put_line('EXIT!');
    UTL_FILE.FCLOSE(v_inputfile);
END;
/

The output would be something like: 
Sun Aug 05 06:00:43 2012
ORA-16957: SQL Analyze time limit interrupt
Sun Aug 05 06:04:44 2012
ORA-20011: Approximate NDV failed: ORA-29913: error in executing ODCIEXTTABLEOPEN callout
KUP-11024: This external table can only be accessed from within a Data Pump job.
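
As an aside: on 11g, if your version exposes the V$DIAG_ALERT_EXT view over the XML alert log, you could skip UTL_FILE altogether. A hedged sketch (the column names are the ones I've seen on 11.2 - check your own dictionary first):

SQL> select originating_timestamp, message_text
     from v$diag_alert_ext
     where message_text like '%ORA-%'
     and originating_timestamp > sysdate - 1;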





Wednesday, August 8, 2012

Golden Gate - how to add table to existing replication



Sometimes we have a task to add a table (or tables) to an existing replication environment. I'd like to demonstrate it clearly. My test example is the classic, easy model:
Source DB host - extract + pump
Target DB host - replicat
However, the extract and pump do not have to run on the source DB host, and the replicat does not have to run on the target DB host either.


1. On the source - add supplemental logging for the table:

GGSCI (source_host) 1> DBLOGIN USERID ****, PASSWORD ****
GGSCI (source_host) 2> add trandata SOURCE_TABLE
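
To double-check that supplemental logging was really enabled, something like this should do:

GGSCI (source_host) 3> info trandata SOURCE_TABLE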

2. Add the table to the extract process EXTRACT_PRC:

GGSCI (source_host) 4> stop EXTRACT_PRC
GGSCI (source_host) 5> edit param EXTRACT_PRC
(add the line below to the parameter file and save it)
table SOURCE_TABLE;
GGSCI (source_host) 6> start EXTRACT_PRC

3. Add the table to the pump process PUMP_PRC:

GGSCI (source_host) 7> stop PUMP_PRC
GGSCI (source_host) 8> DBLOGIN USERID ****, PASSWORD ****
GGSCI (source_host) 9> edit param PUMP_PRC
(add the line below to the parameter file and save it)
table SOURCE_TABLE;

4. Start PUMP_PRC and check the pump statistics:

GGSCI (source_host) 10> start PUMP_PRC
GGSCI (source_host) 11> stats PUMP_PRC, table SOURCE_TABLE


5. Add the table to the replicat process REPL_PRC (and don't start it yet):

GGSCI (target_host) 1> stop REPL_PRC
GGSCI (target_host) 2> edit param REPL_PRC
(add the line below to the parameter file and save it)
map SOURCE_TABLE, target TARGET_TABLE;

6. Create an empty table on the target DB. My table is not big, so it's easier to just do an "insert into ... select". If you have a huge table, it's better to use Data Pump (expdp/impdp) with the FLASHBACK_SCN option. On the target DB:


SQL> create table...TARGET_TABLE;

/* to get the table metadata you may use dbms_metadata.get_ddl on the source table, as sketched below */
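
A minimal sketch of pulling the DDL from the source (SOURCE_SCHEMA below is a placeholder for the real table owner):

SQL> set long 1000000 pagesize 0
SQL> select dbms_metadata.get_ddl('TABLE', 'SOURCE_TABLE', 'SOURCE_SCHEMA')
     from dual;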


7. Get the SCN corresponding to the moment the replicat stopped:

GGSCI (target_host) 4> info REPL_PRC

REPLICAT    REPL_PRC     Last Started 2012-08-07 16:34   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:05:04 ago)
Log Read Checkpoint  File ./dirdat/qm000053
                     2012-08-07 16:30:29.880223  RBA 17561526
GGSCI (target_host) 5> exit


[oracle@target_host gg]$ logdump
Oracle GoldenGate Log File Dump Utility
Version 11.1.1.1.2_05 13609969 OGGCORE_11.1.1.1.3_PLATFORMS_120127.1840
Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

Logdump 1 >open ./dirdat/qm000053
Current LogTrail is /gg/dirdat/qm000053

Logdump 2 >ggstoken detail
Logdump 3 >detail on
Logdump 4 >pos 17561526
Reading forward from RBA 17561526
Logdump 5 >n

2012/07/11 11:56:29.880.595 FieldComp            Len    45 RBA 17561526
Name: SOURCE_TABLE
After  Image:                                             Partition 4   G  s
 0000 000d 0000 0009 3230 3437 3136 3834 3500 0400 | ........204716845...
 0600 0000 0235 3200 0600 0e00 0000 0a33 3036 3538 | .....52........30658
 3631 3630 38                                      | 61608
Column     0 (x0000), Len    13 (x000d)
Column     4 (x0004), Len     6 (x0006)
Column     6 (x0006), Len    14 (x000e)

GGS tokens:
TokenID x52 'R' ORAROWID         Info x00  Length   20
 4141 4172 6d69 4141 4141 4174 702b 7a41 4151 0001 | AAArmiAAAAAtp+zAAQ..
TokenID x4c 'L' LOGCSN           Info x00  Length   12
 3837 3739 3436 3234 3830 3332                     | 877946248032
TokenID x36 '6' TRANID           Info x00  Length   12
 3638 2e33 332e 3335 3832 3634                     | 68.33.358264


8. Fill the target table. As you can see, we insert the data as of SCN = LOGCSN - 1, i.e. the CSN reported by logdump minus one. And of course you need a database link to the source DB.
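
If such a link does not exist yet, a minimal sketch (the user, password and TNS alias are placeholders):

SQL> create database link DB_LINK
     connect to SOURCE_USER identified by ****
     using 'SOURCE_TNS_ALIAS';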

SQL> insert into TARGET_TABLE select * from
SOURCE_TABLE@DB_LINK AS OF SCN  877946248031;
SQL> commit;

9. Start the replicat and check its stats to make sure that Golden Gate replicates the data:
GGSCI (target_host) 4> start REPL_PRC
GGSCI (target_host) 6> info all


Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING      REPL_PRC       00:00:00      00:00:01

GGSCI (target_host) 6> stats  REPL_PRC, table TARGET_TABLE

Sending STATS request to REPLICAT  REPL_PRC  ...

Start of Statistics at 2012-08-07 16:35:38.

Replicating from  SOURCE_TABLE   to  TARGET_TABLE :

*** Total statistics since 2012-08-02 10:08:28 ***
        Total inserts                                2.00
        Total updates                                2.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                             4.00

*** Daily statistics since 2012-08-07 00:00:00 ***

        Total inserts                                2.00
        Total updates                                2.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                             4.00

*** Hourly statistics since 2012-08-07 16:00:00 ***

        Total inserts                                2.00
        Total updates                                2.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                             4.00

*** Latest statistics since 2012-08-02 10:08:28 ***
        Total inserts                                2.00
        Total updates                                2.00
        Total deletes                                2.00
        Total discards                               0.00
        Total operations                             4.00

End of Statistics.



10. Optionally, you may add a PK, indexes (but not triggers) and so on to the target table.
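
For example, a primary key (the constraint name and the column are placeholders):

SQL> alter table TARGET_TABLE add constraint TARGET_TABLE_PK primary key (ID);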

PS: if the DDL of the source and target tables differs, you should use the DEFGEN utility, roughly as sketched below.
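
A rough sketch of the DEFGEN flow, assuming the default dirprm/dirdef directories and placeholder credentials. Contents of dirprm/defgen.prm on the source host:

defsfile ./dirdef/source_table.def
userid ****, password ****
table SOURCE_SCHEMA.SOURCE_TABLE;

Then generate the definitions file and copy it to the target host:

[oracle@source_host gg]$ ./defgen paramfile dirprm/defgen.prm

Finally, reference it in the replicat parameter file:

sourcedefs ./dirdef/source_table.def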




Tuesday, August 7, 2012

Happy three friends: VARCHAR2, UNICODE and CYRILLIC.


Many DBAs know that database migration from a single-byte character set (like CL8MSWIN1251, WE8MSWIN1252 etc.) to a multi-byte character set (AL32UTF8) is a comprehensive task.
In this article I'll talk about 2 very important questions that, I think, must be resolved during the migration procedure:
 1. Considerations about migrating Cyrillic VARCHAR2 data (or any other non-Latin symbols) from a single-byte database to a multi-byte database.
 2. How the NLS_LENGTH_SEMANTICS parameter works in both cases.
The best way to demonstrate how Oracle handles this is test cases.
The examples assume that the database operates in 2 languages - English and Russian.
We want to migrate from 10g CL8MSWIN1251 (single-byte) to 11g AL32UTF8 (multi-byte).
/* the examples are done in PL/SQL, so I get PL/SQL errors like ORA-01461 and ORA-06512. In SQL*Plus, Data Pump or SQL*Loader you will see other errors, like "ORA-12899: value too large for column" and so on. */



10g test with CL8MSWIN1251:
SQL> select * from nls_database_parameters where parameter in ('NLS_RDBMS_VERSION','NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET','NLS_LANGUAGE','NLS_TERRITORY','NLS_LENGTH_SEMANTICS');

PARAMETER                      VALUE
------------------------------ ----------------------------------------
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CHARACTERSET               CL8MSWIN1251
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_RDBMS_VERSION              10.2.0.4.0

 6 rows selected


SQL> create table TEST (b varchar2(4000));

Table created

SQL> declare
  2    a varchar(4000) := 'я'; /* я - russian unique letter */
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      a := a || 'я';
  7    end loop;
  8    insert into test (b) values (a);
  9    commit;
 10    dbms_output.put_line(length(a) ||'..inserted');
 11  end;
 12  /

PL/SQL procedure successfully completed

So, the default NLS_LENGTH_SEMANTICS is BYTE. Let's change it.

SQL> alter session set NLS_LENGTH_SEMANTICS='CHAR';

Session altered

SQL> declare
  2    a varchar(4000) := 'я';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      a := a || 'я';
  7    end loop;
  8    insert into test (b) values (a);
  9    commit;
 10    dbms_output.put_line(length(a) ||'..inserted');
 11  end;
 12  /

PL/SQL procedure successfully completed


SQL> select length(b) from test;

 LENGTH(B)
----------
      4000
      4000

As we can see, since CL8MSWIN1251 encodes any Cyrillic character in 1 byte, we have no problem inserting 4000 characters (= 4000 bytes) into the single-byte database, regardless of whether NLS_LENGTH_SEMANTICS is CHAR or BYTE.
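
To double-check that characters and bytes really coincide here, compare LENGTH with LENGTHB - in this 1251 database both should return 4000 for our rows:

SQL> select length(b), lengthb(b) from test;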



11g test with AL32UTF8:

SQL> select * from nls_database_parameters where parameter in ('NLS_RDBMS_VERSION','NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET','NLS_LANGUAGE','NLS_TERRITORY','NLS_LENGTH_SEMANTICS');

PARAMETER                      VALUE
------------------------------ --------------------------------------------------------------------------------
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CHARACTERSET               AL32UTF8
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_RDBMS_VERSION              11.2.0.3.0

6 rows selected

To see how  NLS_LENGTH_SEMANTICS works:


SQL> create table TEST (b varchar2(4000));

Table created


SQL> declare
  2    a varchar(4000) := 'я';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      a := a || 'я';
  7    end loop;
  8    insert into test (b) values (a);
  9    commit;
 10    dbms_output.put_line(length(a) || '..inserted');
 11  end;
 12  /


ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 7

The variable 'a' exceeds the 4000-byte limit. Fortunately, for VARCHAR2 in PL/SQL (in contrast with SQL) the hard limit is 32767 bytes. Let's also declare the variable larger and move the insert and commit inside the loop, to see how many rows actually get in.



SQL> declare
  2    a varchar(8000) := 'я';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      insert into test (b) values (a);
  7      commit;
  8      a := a || 'я';
  9       end loop;
 10    dbms_output.put_line(length(a) || '..inserted');
 11  end;
 12  /


ORA-01461: can bind a LONG value only for insert into a LONG column
ORA-06512: at line 9


SQL> select max(LENGTH(b)) from test;

MAX(LENGTH(B))
--------------
          2000
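
Comparing LENGTH with LENGTHB makes the arithmetic visible - the longest row should report 2000 characters but 4000 bytes:

SQL> select max(length(b)) chars, max(lengthb(b)) bytes from test;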

Now we see that only 2000 characters were inserted: in Unicode, Cyrillic letters are encoded with 2 bytes each, so 2000 * 2 = the 4000-byte limit. Now, what if we change the table column to VARCHAR2 with CHAR semantics? Or change NLS_LENGTH_SEMANTICS to CHAR?


SQL> drop table test;
Table dropped

SQL> create table test(b varchar2(4000 CHAR));
Table created

It is not strictly necessary, but let's switch the session semantics as well:

SQL> alter session set NLS_LENGTH_SEMANTICS='CHAR';
Session altered


SQL> declare
  2    a varchar(8000) := 'я';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      a := a || 'я';
  7    end loop;
  8    insert into test (b) values (a);
  9    commit;
 10    dbms_output.put_line(length(a) || '..inserted');
 11  end;
 12  /

ORA-01461: can bind a LONG value only for insert into a LONG column
ORA-06512: at line 9


Wow, we still can't insert 4000 characters? What's going on? Let's look at how many rows made it:


SQL> declare
  2    a varchar(8000) := 'я';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      insert into test (b) values (a);
  7      commit;
  8      a := a || 'я';
  9       end loop;
 10    dbms_output.put_line(length(a) || '..inserted');
 11  end;
 12  /


SQL> select max(LENGTH(b)) from test;
MAX(LENGTH(B))
--------------
          2000

Still only 2000. What about non-Cyrillic characters?


SQL> truncate table test;
Table truncated

SQL> declare
  2    a varchar(8000) := 'z';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      a := a || 'z';
  7    end loop;
  8    insert into test (b) values (a);
  9    commit;
 10    dbms_output.put_line(length(a) || '..inserted');
 11  end;
 12  /
PL/SQL procedure successfully completed

SQL> select max(LENGTH(b)) from test;
MAX(LENGTH(B))
--------------
          4000


Obviously, Latin characters are encoded in 1 byte, and Cyrillic ones in 2 bytes. Remember, the session and column semantics are still CHAR... so what is going on? The answer is in:


Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]

where we read:

You may have defined the column as 4000 CHAR, but the "limit behind the scene" is 4000 BYTES 

Nice. I'm just glad that behind this scene we didn't run into some wall.