Saturday, December 29, 2012

Oratop - a light, simple DB monitoring tool for Linux

This tool will probably be useful for system administrators, or as a quick synopsis for DBAs. It is really easy to use, very lightweight, and good for a cursory database performance overview. It can be downloaded from MOS [ID 1500864.1]. The tool is made for Linux (x32 and x64) and 11gR2 only. Just run it, authorizing as some privileged user:
[oracle@dbtest oratop]$ ./oratop 




oratop - utility for near real-time monitoring of databases, RAC and Single Instance [ID 1500864.1]

Active Data Guard SLA pursuit

When your business users run read-only reports from ADG (a physical standby database opened READ ONLY), there may be an unacceptable lag between the primary and standby databases. Since 11.2 there is a nice view to see this lag.

SQL> SELECT * FROM V$STANDBY_EVENT_HISTOGRAM WHERE NAME = 'apply lag' AND COUNT > 0 and last_time_updated like '%12/29/2012%';

NAME         TIME UNIT          COUNT LAST_TIME_UPDATED
------------ ---- ---------- -------- --------------------
apply lag       0 seconds     3780291 12/29/2012 09:54:04
apply lag       1 seconds      319929 12/29/2012 09:52:32
apply lag       2 seconds        6509 12/29/2012 07:10:20
apply lag       3 seconds        3868 12/29/2012 05:52:06
apply lag       4 seconds        2956 12/29/2012 05:52:07
apply lag       5 seconds        2223 12/29/2012 05:46:56
apply lag       6 seconds        1344 12/29/2012 05:46:57
apply lag       7 seconds        1118 12/29/2012 05:46:58
apply lag       8 seconds        1144 12/29/2012 05:33:56
apply lag       9 seconds        1094 12/29/2012 05:33:57
apply lag      10 seconds        1098 12/29/2012 05:33:58
apply lag      11 seconds        1001 12/29/2012 05:33:59
apply lag       2 minutes        6363 12/29/2012 02:34:41
apply lag       3 minutes        1786 12/29/2012 02:35:54

61 rows selected
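Besides the histogram, the current lag values can be checked via V$DATAGUARD_STATS. A sketch (run it on the standby instance):

SQL> SELECT NAME, VALUE, TIME_COMPUTED
     FROM V$DATAGUARD_STATS
     WHERE NAME IN ('apply lag','transport lag');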

So, in my environment a 2-3 minute lag is unacceptable. Of course the best way is to tune the apply speed, but that is not always possible, due to network limits for example. But we can guarantee users some SLA via this kind of trigger; a violation of the limit will lead to "ORA-03172: STANDBY_MAX_DATA_DELAY of 60 seconds exceeded". Create this trigger on the primary, and it will be applied on the standby. Here is a simple Oracle example:

SQL> CREATE OR REPLACE TRIGGER LOGON_TO_STANDBY_TRIGGER
AFTER LOGON ON some_user.SCHEMA
BEGIN
  IF SYS_CONTEXT('USERENV','DATABASE_ROLE') = 'PHYSICAL STANDBY'
  THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET STANDBY_MAX_DATA_DELAY=60';
  END IF;
END;
/
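After the trigger is in place, a standby session whose query would read data staler than the limit fails instead of silently returning old results. A sketch of what the user sees (the table name is hypothetical):

SQL> SELECT COUNT(*) FROM report_table;
SELECT COUNT(*) FROM report_table
*
ERROR at line 1:
ORA-03172: STANDBY_MAX_DATA_DELAY of 60 seconds exceeded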



Friday, December 28, 2012

Golden Gate plugin for EM Cloud Control 12c

Today I'd like to show how to monitor Golden Gate infrastructure in EM Cloud Control 12c.
Requirements:
- Oracle GoldenGate release 11.2.1.0.1 or later (I used version 11.2.1.0.3)
- Oracle EM Cloud Control 12c Bundle Patch 1 (12.1.0.1) or later (I used Bundle Patch 2, i.e. 12.1.0.2)
- Any release of JDK 1.6 (not JRE). I used the latest for today, 1.6.0_38. Check the java version:

[oracle@dbtest ~]$ $JAVA_HOME/bin/java -version
java version "1.6.0_38"
Java(TM) SE Runtime Environment (build 1.6.0_38-b05)
Java HotSpot(TM) 64-Bit Server VM (build 20.13-b02, mixed mode)

- Any OS where GG can run except: HP NonStop, IBM iSeries, and IBM z/OS. I used OEL Linux 5.6
- I assume EM CC12c and agents 12c are deployed on all required targets.

Steps:
- Configure plug-in and java
- Download the plug-in via EM CC12c: Setup->Extensibility->Plug-ins->Middleware->Oracle GoldenGate
- Deploy the plug-in on the Management Service (right click; it will freeze EM for a few minutes and restart the OMS)
- Deploy the plug-in on all EM Agents 12c (right click)
For security-restriction reasons, you may want to download the plug-in from https://edelivery.oracle.com/ and manually import it into the OMS.
- Update java tzdata. There is an issue in EM 12c. I have no idea why 12.1.0.2 is packed with the ancient JDK 1.6.0_24, which has the old time offset for Moscow of +3; the current offset is +4. So, if you want correct time in EM and no time-related issues, you should update the java tzdata in all 12c agents and the OMS. Download the timezone updater tool with the latest tzdata file included here: http://www.oracle.com/technetwork/java/javase/tzupdater-readme-136440.html
As a workaround you may run the java processes with a key like -Duser.timezone=GMT+4 (correct for your country), and/or correct $AGENT_HOME/sysman/config/emd.properties for the agents and choose the wrong country in EM. But I think that's not best practice.
- Install the JDK on every host target you plan to observe (http://www.oracle.com/technetwork/java/javase/downloads/jdk6u38-downloads-1877406.html)

- Configure Golden Gate targets
  - add ENABLEMONITORING to the GLOBALS parameter file and save it:

[oracle@test ~]$ cd $GG_HOME
[oracle@dbtest gg]$ ggsci
GGSCI (test.local) 1> edit params ./GLOBALS
CHECKPOINTTABLE GGS_CHECKPOINT
GGSCHEMA gguser
ENABLEMONITORING


- configure a password for the java agent:

[oracle@dbtest gg]$ ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent:

- configure jagent parameters in $GG_HOME/cfg/Config.properties

agent.type.enabled=OEM
jagent.rmi.port=5559
jagent.host=test.local
jagent.jmx.port=5555
jagent.username=user_name


- create the jagent datastore (Berkeley DB) to store monitoring data, and restart the manager process:

[oracle@test ~]$ cd $GG_HOME
[oracle@dbtest gg]$ ggsci
GGSCI (dbtest.local) 1> create datastore
GGSCI (dbtest.local) 2> stop mgr
GGSCI (dbtest.local) 3> start mgr
GGSCI (dbtest.local) 4> start jagent


GGSCI (dbtest.local) 1> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
JAGENT      RUNNING
REPLICAT    RUNNING     RQIWIE      00:00:00      00:00:01
REPLICAT    RUNNING     RQIWIM      00:00:00      00:00:00
REPLICAT    RUNNING     RQIWITX     00:00:00      00:00:02

- Add Golden Gate instance to monitoring:

 - Setup->Add Target->Configure Auto Discovery->Edit Parameters. Add the JAgent username, password and JAgent RMI Port=5559, which were previously specified in the Config.properties file on the targets.
 - Add the target: Setup->Add Target->Auto Discovery Results-> choose Golden Gate and promote it.
 - Done. You can see pictures like this:


Oracle Install Guide:
Oracle® Enterprise Manager Oracle GoldenGate System Monitoring Plug-In Installation Guide, Release 12.1.0.1.1




Thursday, August 9, 2012

Reading alert.log in PL/SQL

In this short article I'll show a simple block to read errors from alert.log without having access to the server itself. Be careful: if the log file is big enough, the script will take a long time. In my case a 300 MB file was parsed in approximately 40 seconds, plus 10 seconds to output the result. However, you may abort it at any time.
Let's find alert.log and create a work directory. You also need the privileges to work with UTL_FILE:


SQL> select physical_filename,physical_path from v$diag_dir_ext where physical_filename like 'alert%.log'

SQL> create or replace directory TRACE as 'path_where_alert_log_is';



SQL> DECLARE
  v_dir       varchar2(50) := 'TRACE';
  v_name      varchar2(30) := 'alert_SID.log';
  v_inputfile UTL_FILE.FILE_TYPE;
  v_newline   varchar2(500);
  v_newline_1 varchar2(500);
BEGIN
  dbms_output.enable(10000000);
  v_inputfile := UTL_FILE.FOPEN(v_dir, v_name, 'r');
  LOOP
    utl_file.get_line(v_inputfile, v_newline); -- raises NO_DATA_FOUND at end of file
    IF substr(v_newline,1,4) = 'ORA-'
       AND substr(v_newline,1,9) NOT IN ('ORA-00942','ORA-06550','ORA-12012') /* you may specify some expected errors here */
    THEN
      dbms_output.put_line(v_newline);
      utl_file.get_line(v_inputfile, v_newline_1); -- print the following line too (continuation of the error)
      dbms_output.put_line(v_newline_1);
    END IF;
  END LOOP;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    dbms_output.put_line('EXIT!');
    UTL_FILE.FCLOSE(v_inputfile);
END;
/

The output would be something like: 
Sun Aug 05 06:00:43 2012
ORA-16957: SQL Analyze time limit interrupt
Sun Aug 05 06:04:44 2012
ORA-20011: Approximate NDV failed: ORA-29913: error in executing ODCIEXTTABLEOPEN callout
KUP-11024: This external table can only be accessed from within a Data Pump job.
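As an alternative, in 11g the XML alert log is also exposed through the fixed table X$DBGALERTEXT (visible to SYS only, and it can also be slow on a big log); a sketch, assuming the standard MESSAGE_TEXT and ORIGINATING_TIMESTAMP columns:

SQL> SELECT originating_timestamp, message_text
     FROM x$dbgalertext
     WHERE message_text LIKE '%ORA-%';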





Wednesday, August 8, 2012

Golden Gate - how to add table to existing replication



Sometimes we have a task to add a table (or tables) to an existing replication environment. I'd like to demonstrate it clearly. My test example is the classic, easy model:
Source DB host - extract + pump
Target DB host - replicat
But the extract and pump are not necessarily on the source DB host, and likewise the replicat may reside somewhere other than the target DB host.


1. On the source, add supplemental logging for the table:

GGSCI (source_host) 1> DBLOGIN, USERID ****, PASSWORD ****
GGSCI (source_host) 2> add trandata SOURCE_TABLE

2. Add the table to the extract process EXTRACT_PRC (the TABLE line is added inside its parameter file):

GGSCI (source_host) 3> stop EXTRACT_PRC
GGSCI (source_host) 4> edit param EXTRACT_PRC
table SOURCE_TABLE;
GGSCI (source_host) 5> start EXTRACT_PRC

3. Add the table to the pump process PUMP_PRC:

GGSCI (source_host) 6> stop PUMP_PRC
GGSCI (source_host) 7> DBLOGIN, USERID ****, PASSWORD ****
GGSCI (source_host) 8> edit param PUMP_PRC
table SOURCE_TABLE;

4. Start PUMP_PRC, check the pump statistics:

GGSCI (source_host) 9> start PUMP_PRC
GGSCI (source_host) 10> stats PUMP_PRC, table SOURCE_TABLE


5. Add the table to the replicat process REPL_PRC (and don't start it yet):

GGSCI (target_host) 1> stop REPL_PRC
GGSCI (target_host) 2> edit param REPL_PRC
map SOURCE_TABLE, target TARGET_TABLE;

6. Create an empty table on the target DB. My table is not big, so it's easier to "insert into ... select". If you have a huge table, it's better to use Data Pump (expdp/impdp) with the FLASHBACK_SCN option. On the target DB:


SQL> create table...TARGET_TABLE;

/* to get the table metadata you may use dbms_metadata.get_ddl on the source table */
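For example, the source table DDL can be extracted like this (the schema name is a placeholder):

SQL> SELECT dbms_metadata.get_ddl('TABLE','SOURCE_TABLE','SOURCE_SCHEMA') FROM dual;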


7. Get the SCN at the time the replicat was stopped:

GGSCI (target_host) 4> info REPL_PRC

REPLICAT    REPL_PRC     Last Started 2012-08-07 16:34   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:05:04 ago)
Log Read Checkpoint  File ./dirdat/qm000053
                     2012-08-07 16:30:29.880223  RBA 17561526
GGSCI (target_host) 5> exit


[oracle@target_host gg]$ logdump
Oracle GoldenGate Log File Dump Utility
Version 11.1.1.1.2_05 13609969 OGGCORE_11.1.1.1.3_PLATFORMS_120127.1840
Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

Logdump 1 >open ./dirdat/qm000053
Current LogTrail is /gg/dirdat/qm000053

Logdump 2 >ggstoken detail
Logdump 3 >detail on
Logdump 4 >pos 17561526
Reading forward from RBA 17561526
Logdump 5 >n

2012/07/11 11:56:29.880.595 FieldComp            Len    45 RBA 17561526
Name: SOURCE_TABLE
After  Image:                                             Partition 4   G  s
 0000 000d 0000 0009 3230 3437 3136 3834 3500 0400 | ........204716845...
 0600 0000 0235 3200 0600 0e00 0000 0a33 3036 3538 | .....52........30658
 3631 3630 38                                      | 61608
Column     0 (x0000), Len    13 (x000d)
Column     4 (x0004), Len     6 (x0006)
Column     6 (x0006), Len    14 (x000e)

GGS tokens:
TokenID x52 'R' ORAROWID         Info x00  Length   20
 4141 4172 6d69 4141 4141 4174 702b 7a41 4151 0001 | AAArmiAAAAAtp+zAAQ..
TokenID x4c 'L' LOGCSN           Info x00  Length   12
 3837 3739 3436 3234 3830 3332                     | 877946248032
TokenID x36 '6' TRANID           Info x00  Length   12
 3638 2e33 332e 3335 3832 3634                     | 68.33.358264


8. Fill the target table. As you can see, we insert data as of SCN = (SCN when the replicat stopped) - 1; the LOGCSN shown by logdump was 877946248032, hence 877946248031. And of course you need to create a database link to the source DB.

SQL> insert into TARGET_TABLE select * from
SOURCE_TABLE@DB_LINK AS OF SCN  877946248031;
SQL> commit;
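To double-check the initial load, you may compare row counts as of the same SCN (a sketch):

SQL> SELECT COUNT(*) FROM TARGET_TABLE;
SQL> SELECT COUNT(*) FROM SOURCE_TABLE@DB_LINK AS OF SCN 877946248031;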

9. Start the replicat and check the stats; make sure that Golden Gate replicates data:
GGSCI (target_host) 4> start REPL_PRC
GGSCI (target_host) 6> info all


Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING      REPL_PRC       00:00:00      00:00:01

GGSCI (target_host) 6> stats  REPL_PRC, table TARGET_TABLE

Sending STATS request to REPLICAT  REPL_PRC  ...

Start of Statistics at 2012-08-07 16:35:38.

Replicating from  SOURCE_TABLE   to  TARGET_TABLE :

*** Total statistics since 2012-08-02 10:08:28 ***
        Total inserts                                2.00
        Total updates                                2.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                             4.00

*** Daily statistics since 2012-08-07 00:00:00 ***

        Total inserts                                2.00
        Total updates                                2.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                             4.00

*** Hourly statistics since 2012-08-07 16:00:00 ***

        Total inserts                                2.00
        Total updates                                2.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                             4.00

*** Latest statistics since 2012-08-02 10:08:28 ***
        Total inserts                                2.00
        Total updates                                2.00
        Total deletes                                2.00
        Total discards                               0.00
        Total operations                             4.00

End of Statistics.



10. Optionally, you may add PKs, indexes (but not triggers) and so on to the target table.

PS: if the DDL of the source and target tables differ, you should use the DEFGEN utility.




Tuesday, August 7, 2012

Happy three friends: VARCHAR2, UNICODE and CYRILLIC.


Many DBAs know that database migration from a single-byte character set (like CL8MSWIN1251, WE8MSWIN1252 etc.) to a multi-byte character set (AL32UTF8) is a comprehensive task.
In this article I'll talk about 2 very important questions that I think must be resolved during the migration procedure:
 1. Considerations about cyrillic VARCHAR2 data migration (or any other non-latin symbols) from a single-byte database to a multibyte database.
 2. How the NLS_LENGTH_SEMANTICS parameter works in both cases.
The best way to demonstrate how Oracle handles this is test cases.
The example assumes that the database operates in 2 languages - english and russian.
We want to migrate from 10g CL8MSWIN1251 (single-byte) to 11g AL32UTF8 (multibyte).
/* the examples are done in PL/SQL, so I get PL/SQL errors like ORA-01461, ORA-06512. In SQL*Plus, Data Pump, or SQL*Loader you will see other errors like "ORA-12899: value too large for column" and so on. */



10g test with CL8MSWIN1251:
SQL> select * from nls_database_parameters where parameter in ('NLS_RDBMS_VERSION','NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET','NLS_LANGUAGE','NLS_TERRITORY','NLS_LENGTH_SEMANTICS');

PARAMETER                      VALUE
------------------------------ ----------------------------------------
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CHARACTERSET               CL8MSWIN1251
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_RDBMS_VERSION              10.2.0.4.0

 6 rows selected


SQL> create table TEST (b varchar2(4000));

Table created

SQL> declare
  2    a varchar(4000) := 'я'; /* я - russian unique letter */
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      a := a || 'я';
  7    end loop;
  8    insert into test (b) values (a);
  9    commit;
 10    dbms_output.put_line(length(a) ||'..inserted');
 11  end;
 12  /

PL/SQL procedure successfully completed

So, the default NLS_LENGTH_SEMANTICS is BYTE. Let's change it.

SQL> alter session set NLS_LENGTH_SEMANTICS='CHAR';

Session altered

SQL> declare
  2    a varchar(4000) := 'я';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      a := a || 'я';
  7    end loop;
  8    insert into test (b) values (a);
  9    commit;
 10    dbms_output.put_line(length(a) ||'..inserted');
 11  end;
 12  /

PL/SQL procedure successfully completed


SQL> select length(b) from test;

 LENGTH(B)
----------
      4000
      4000

As we see, since CL8MSWIN1251 encodes any cyrillic symbol in 1 byte, we have no limits on inserting 4000 chars (= bytes) into the single-byte database, regardless of whether NLS_LENGTH_SEMANTICS is CHAR or BYTE.



11g test with AL32UTF8:

SQL> select * from nls_database_parameters where parameter in ('NLS_RDBMS_VERSION','NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET','NLS_LANGUAGE','NLS_TERRITORY','NLS_LENGTH_SEMANTICS');

PARAMETER                      VALUE
------------------------------ --------------------------------------------------------------------------------
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CHARACTERSET               AL32UTF8
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CHARACTERSET         AL16UTF16
NLS_RDBMS_VERSION              11.2.0.3.0

6 rows selected

To see how  NLS_LENGTH_SEMANTICS works:


SQL> create table TEST (b varchar2(4000));

Table created


SQL> declare
  2    a varchar(4000) := 'я';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      a := a || 'я';
  7    end loop;
  8    insert into test (b) values (a);
  9    commit;
 10    dbms_output.put_line(length(a) || '..inserted');
 11  end;
 12  /


ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 7

Variable 'a' exceeds the 4000-byte limit. Fortunately for VARCHAR2 in PL/SQL (in contrast with SQL), the hard limit is 32767 bytes. Let's also commit on every iteration to see how many inserts succeed.



SQL> declare
  2    a varchar(8000) := 'я';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      insert into test (b) values (a);
  7      commit;
  8      a := a || 'я';
  9       end loop;
 10    dbms_output.put_line(length(a) || '..inserted');
 11  end;
 12  /


ORA-01461: can bind a LONG value only for insert into a LONG column
ORA-06512: at line 9


SQL> select max(LENGTH(b)) from test;


MAX(LENGTH(B))

--------------

          2000

Now we see that only 2000 symbols were inserted. In UNICODE, cyrillic letters are encoded in 2 bytes, that is 2000*2 = the 4000-byte limit. Now what if we change the table column to VARCHAR2 ... CHAR (or change NLS_LENGTH_SEMANTICS to CHAR)?
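The byte expansion is easy to see with LENGTH vs LENGTHB (in the AL32UTF8 database):

SQL> SELECT LENGTH('я') AS chars, LENGTHB('я') AS bytes FROM dual;

     CHARS      BYTES
---------- ----------
         1          2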


SQL> drop table test;
Table dropped

SQL> create table test(b varchar2(4000 CHAR));
Table created

It is not necessary but still:

SQL> alter session set NLS_LENGTH_SEMANTICS='CHAR';
Session altered


SQL> declare
  2    a varchar(8000) := 'я';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      a := a || 'я';
  7    end loop;
  8    insert into test (b) values (a);
  9    commit;
 10    dbms_output.put_line(length(a) || '..inserted');
 11  end;
 12  /

ORA-01461: can bind a LONG value only for insert into a LONG column
ORA-06512: at line 9


Wow, we still can't insert 4000 symbols? wtf? Let's look at how many rows made it in:


SQL> declare
  2    a varchar(8000) := 'я';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      insert into test (b) values (a);
  7      commit;
  8      a := a || 'я';
  9       end loop;
 10    dbms_output.put_line(length(a) || '..inserted');
 11  end;
 12  /


SQL> select max(LENGTH(b)) from test;
MAX(LENGTH(B))
--------------
          2000

Still only 2000. What about non-cyrillic symbols?


SQL> truncate table test;
Table truncated

SQL> declare
  2    a varchar(8000) := 'z';
  3  begin
  4    dbms_output.enable(4000);
  5    while length(a) < 4000 loop
  6      a := a || 'z';
  7    end loop;
  8    insert into test (b) values (a);
  9    commit;
 10    dbms_output.put_line(length(a) || '..inserted');
 11  end;
 12  /
PL/SQL procedure successfully completed

SQL> select max(LENGTH(b)) from test;
MAX(LENGTH(B))
--------------
          4000


Obviously, latin symbols are encoded in 1 byte, and cyrillic in 2 bytes. I remind you that the session and column semantics are still CHAR... so wtf? The answer is in:


Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]

where we read:

You may have defined the column as 4000 CHAR, but the "limit behind the scene" is 4000 BYTES 

Nice. I'm happy that behind this scene we didn't see some wall.

