Tuesday, September 20, 2011

Benchmark wars and Number Games

What differentiates a seasoned performance engineer from a developer (however senior) is that the seasoned engineer doesn't have to rerun test after test of more or less the same workload to identify bottlenecks in code. He or she can spot design issues that will cause performance and scalability problems in the long run with fewer, carefully focused test runs, saving a lot of heat in arguments (saving fuel and energy) and, most importantly, catching issues much earlier in the development cycle, which helps everyone.

Sounds nice to have such folks on board! But wait a minute: all this is good only if the dev folks listen. In a predominantly developer-dominated organization that simply doesn't happen, for various reasons, and even the seasoned performance engineer goes through tough times of re-running tests again and again to spot the same old problems he or she has already shouted about a thousand times from the rooftop.

Equally telling, the quality of the ideas and tools he or she uses to arrive at results quickly can surprise even the most senior dev folks, so much so that they initially try to intimidate or resist such changes.

No offense meant to any developer folks reading this!

Friday, August 12, 2011

Impatient Patient and a Helpless Doctor Syndrome

Working on performance and stability in agile development environments and developer ecosystems poses some interesting challenges for performance engineering folks.

I am reminded of what I call the "Impatient Patient and Helpless Doctor" syndrome in such environments.

The development staff are mostly very senior and have already given you the impression that they know and understand the systems they develop well enough; they want help only in carrying out what they have decided, or think, needs to be done.

This is like an impatient, but very learned, patient asking the doctor to check him over and get rid of all the "XYZ" scalability and performance issues. What follows is that the doctor initially goes into a helpless mode, because the patient himself dictates which tests should be performed and often even suggests the cure for the assumed illness.

Only a successful doctor helps the patient out of such mental concoctions and manages to truly help in the end, winning the patient's confidence. But this doesn't come easily, and it can very well happen that the doctor, though clever enough, becomes crippled and helpless to the point of losing interest in the subject.

Lesson learnt: the patient should let the doctor do his or her job without imposing too many blockades, and the doctor should also give some mental peace to the agitated patient along with the cures. Needless to say, the patient really does need to be cured, but providing mental peace helps.


Sunday, October 03, 2010

Software modeling and Issues

Understanding the problem to be solved deeply, together with knowledge of various architectural design patterns, helps in evolving a reliable, performant and scalable solution. Applying the same design pattern everywhere, or getting into a fixed, rigid way of solving all problems, is a major issue that cripples software. For example, presentation, compute and data-oriented problems are all unique, and one cannot apply the same pattern to all of them.
One also needs to understand that no matter how hard you analyze and design during development or testing, some disruptive innovation will come along that may force you to rethink your design, if not in the near future then later. These disruptive innovations are inevitable and cannot be accommodated in a design, but you can anticipate minor changes and provide "knobs" in the design to turn certain minor behaviors on or off, which can still change performance and scalability to some extent.

It is not surprising, at least for me, having spent the last 10+ years testing large software systems, to see that most Java apps still suffer from concurrency and GC issues.
A lot of research is going into the concurrency area, with functional constructs similar to those of Erlang, Clojure and Scala coming into Java 7, and more work on atomic locking constructs and scaling across cores.
This is still a growing area, with software/hardware transactional memory, message passing, shared state with more controls and other techniques all being explored in terms of code clarity and time/space trade-offs, and even hardware/software co-design like that of Oracle's Exalogic and Azul compute appliances.
Another area is embracing some form of predictive or limited GC, or avoiding GC as much as possible.
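As a small illustration of the atomic, lock-free constructs mentioned above (my own sketch, not from the original post), java.util.concurrent.atomic lets many threads update a shared counter via compare-and-swap, with no synchronized block and no lost updates:

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        final AtomicLong hits = new AtomicLong();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int n = 0; n < 100_000; n++) {
                    hits.incrementAndGet(); // lock-free CAS update
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(hits.get()); // always 400000; a bare long would lose updates
    }
}
```

The same increment on a plain `long` field, without synchronization, would routinely print less than 400000 under contention.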

I have been involved in software testing from the stability, performance and scalability aspects since 2000, starting with mainframes (COBOL, CICS) up to the latest apps running on the JVM using various design patterns.
Many times I tell people things to watch out for, or give my opinions, which they initially neglect; later they come back to me to say "Oh yes, you said that some time back... I didn't get it..."

Friday, October 01, 2010

Concurrency and scaling choices

Let me make the following clear for whoever is reading my blog:

"I neither speak for the company I work for,
nor does my company speak for me.
All the ideas, thoughts and impressions on the tools I list in my blog
are my own."

JSR 166, the concurrency utilities, applies from J2SE 1.5 onwards, whereas JSR 237 is an attempt to take it to J2EE 1.4 onwards at the container level.
JSR 173, the streaming XML parser aka the pull parser, is something I find needed in certain situations and is not what most people think of first when it comes to XML parsing.
(RogueWave and VTD-XML are other choices I hear of from my friends, but I don't know much about them; something to think about later.)
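A minimal sketch of what the JSR 173 pull model looks like (my own illustration, not from the JSR itself): the application asks the parser for the next event instead of being called back for every node, which keeps memory flat even for huge documents. The StAX API ships with the JDK as javax.xml.stream:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class PullParserDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<orders><order id=\"1\"/><order id=\"2\"/></orders>";
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        int orders = 0;
        while (reader.hasNext()) {           // the caller pulls events one at a time
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "order".equals(reader.getLocalName())) {
                orders++;
            }
        }
        reader.close();
        System.out.println("orders=" + orders);
    }
}
```

Contrast this with DOM, which materializes the whole tree, or SAX, which pushes events at you; the pull style leaves control flow with the application.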

JSR 107, JCache/caching in Java, is another big area that interests me, with many technologies emerging to support it
(Oracle's Coherence, JGroups with Infinispan, GigaSpaces, Terracotta, GridGain and the open-source Hazelcast are some of the useful ones if you are interested in exploring this area, each with varying capabilities and use cases).
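The JCache API itself was still in flux at the time, but the core idea all these products build on can be sketched with plain java.util.concurrent (my sketch, not any vendor's API): a cache-aside lookup that loads and stores a value on miss, atomically, so concurrent callers don't duplicate the expensive load:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class CacheAsideDemo {
    // counts how many times we actually hit the (simulated) backing store
    static final AtomicInteger loads = new AtomicInteger();
    static final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    static String load(String key) {      // stand-in for a DB or remote call
        loads.incrementAndGet();
        return key.toUpperCase();
    }

    static String get(String key) {
        // computeIfAbsent is an atomic check-then-load, avoiding duplicate loads
        return cache.computeIfAbsent(key, CacheAsideDemo::load);
    }

    public static void main(String[] args) {
        get("oracle"); get("oracle"); get("coherence");
        System.out.println(loads.get()); // 2: the second "oracle" lookup was a hit
    }
}
```

The distributed products listed above add eviction, replication and partitioning on top of this same get-or-load contract.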

Wednesday, September 22, 2010

JSR 166 and JSR 173

For some strange reason I found myself needing to know the design decisions behind the two JSRs, 166 and 173.
If time permits I will write a post on why these two have been on my mind of late, and on their importance with regard to performance and scalability.

On a side note, I see a lot of people either use the wrong API or reinvent the wheel, possibly because they don't know the merits of, or are unable to use, the standard tested APIs and utilities, and sometimes they stray into disasters in both the correctness and the performance of the intended functionality.
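A classic instance of the reinvented wheel (my example, not from the post): a hand-rolled "if absent then put" on a plain HashMap is a check-then-act race under concurrency, while the standard utility does the same thing as one atomic operation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> connections = new ConcurrentHashMap<>();

        // Reinvented (racy on a plain map): if (!m.containsKey(k)) m.put(k, v);
        // two threads can both pass the check and both put.
        // Standard utility: putIfAbsent is a single atomic operation.
        Integer prev = connections.putIfAbsent("db", 1);
        System.out.println(prev);                  // null: no previous mapping
        prev = connections.putIfAbsent("db", 99);
        System.out.println(prev);                  // 1: existing value kept
        System.out.println(connections.get("db")); // 1
    }
}
```

The hand-rolled version usually "works" in testing and then corrupts state under production load, which is exactly the correctness-plus-performance disaster described above.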

Saturday, September 11, 2010

Most often Stack/runtime does it better than you

Throughout my interactions with many experienced developers, I sometimes find it hard to explain and convince people that code written by the runtime is cheaper and better thought out than trying to do the same in the application layer.

Most tracking and diagnostics are well handled by the technology stack or the runtime. E.g., your OS, DB or VM can provide tracking and diagnostic capabilities for the application running on top of them much more cheaply and safely. So there is no reason why one would need to do the same in application code; you can focus instead on the business logic of the application. You may occasionally have to use the underlying diagnostic facilities through the external API exposed, to add some application context, which is about the only thing I see missing in the diagnostic capabilities the stack or runtime exposes for you.

Having said that, things are different for each stack and runtime today, the extent to which you can use the underlying diagnostics varies, and only rarely will you see a benefit in writing your own diagnostic tracking/control mechanism.
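As a sketch of what the runtime already tracks for you (my illustration, assuming a standard JVM), the java.lang.management API exposes heap, GC and thread diagnostics the VM maintains anyway, without a single line of bookkeeping in application code:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class RuntimeDiagnosticsDemo {
    public static void main(String[] args) {
        // Heap usage the VM maintains anyway; no application-side counters needed.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long usedHeap = memory.getHeapMemoryUsage().getUsed();
        System.out.println("heap used >= 0: " + (usedHeap >= 0));

        // Collection counts per garbage collector, again for free.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("gc bean named: " + !gc.getName().isEmpty());
        }

        System.out.println("threads > 0: "
                + (ManagementFactory.getThreadMXBean().getThreadCount() > 0));
    }
}
```

The same beans are remotely accessible over JMX, which is where adding your own context (e.g. tagging a business transaction) is about the only application-side work left.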

Friday, September 10, 2010

Do surrogates really suck?

Do Surrogates really suck in performance profiling?

I was going through an article on application performance by a noted Oracle expert, which mentioned that surrogates suck when it comes to profiling applications for response time. Let me make it very clear that I don't dispute the author; I only want to make the point that while response time is ideally the best measure for understanding the profile of an application or process, measuring it may not always be possible given the overheads. In such cases, a careful choice of surrogate measure, depending on the technology, should still help in understanding the profile. So the answer is yes and no.

For me, many times the thoughts and ideas I bring in from various fields of exposure help me understand the performance of a system, sometimes in better and cleaner ways.

Tuesday, September 07, 2010

Model is thy code

I have rarely had time to write on my blog over the last few years. But when I do, there are a few good motivations and forces which drive me to write on things I consider important.

After over 10 years in IT, I am thrilled to see that I have always been working on projects which have stability and performance as one of the key goals (if not the most important), and I was lucky enough to get my hands on various technologies, starting from:

IBM mainframes (MVS/OS 390, JCL, COBOL, CICS, REXX and DB2 to some extent),
the UNIX C and C++ saga,
VB, ASP, HTML and web technologies,
.NET and J2EE systems,
Teradata, and of course Oracle technologies.


Some of the important projects I have worked on include one for a major Telco (for its IT LOB) in the UK, in their core intercarrier-billing and provisioning systems/OSS, a big legacy conversion project. I was also a witness to SOA/web services technology (as of 2004), which I came across in one of the projects, although it didn't materialize for the customer.


It has been an all-round mix of exposure to the IT and business sides that I have witnessed through these years, and thanks to my alma mater consulting firm I got the chance to work in all these areas. Over the years I keep gaining a deeper understanding of the trio of people, processes and technologies in projects, and of how they shape up and interact with each other to deliver. Not surprisingly, it is often the people who play the dominant positive or negative role in successful or failed projects.
It is a nightmare to imagine working on a large conversion of a big system X where several teams and stakeholders are involved.
The problem is not the technology or tools, or the interaction between systems, but the complexity of the interactions with the people of the various systems that the project depends on.


Back to the topic of this blog post, "Model is thy code". I have been saying this for a long time; in fact it is one of my favorite quotes, which I coined way back in 2003 as "Model is the code" (of course, I don't claim to know who may have coined it first), and most of the libraries I come across miss this pivotal concept. If your API is not based on a solid model, then the software using it is sure to develop usability and scalability issues in the long run.
Correctness/completeness is an important 'C' factor alongside the other usual C's that pundits claim for performance and scalability (C-Concurrency/parallelism in today's multi-core world, C-Contention and C-Coherency). Often it simply stops at correctness, and it doesn't make sense to move forward. Though it may not be possible for someone interested in the other C's to always ensure the C-Correctness factor, one should at least take a step back and think about it, for it is possible that your other C's will be affected, or that it doesn't make sense to proceed at all.

Looking at the nonsense that is out there in blogs and sites on software performance and scalability, I can only feel pity for the sheer lack of understanding of the subject, and for the dangerous mix of false claims and ideas from those who have never had any hands-on experience with the technologies involved.

Enough of my time has been spent on software testing and performance. It is time I take the initiative to move on to something fresh that keeps me going.

Saturday, December 13, 2008

Making a one-line description mandatory as part of SQL syntax

Has anyone thought about making a comment, a one-line (or optionally multi-line) description of what rows a SQL statement tries to fetch, a mandatory part of SQL syntax?
I am sure people must have thought about this, but I wonder if it's feasible...
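The idea isn't part of any SQL standard, but one could approximate it today with a convention plus a trivial check (a hypothetical sketch of mine): require every SQL string in the code base to open with a `/* ... */` description and flag any that don't:

```java
public class SqlCommentCheck {
    // Hypothetical convention: every statement must open with /* description */.
    static boolean hasDescription(String sql) {
        String s = sql.trim();
        return s.startsWith("/*") && s.indexOf("*/") > 2;
    }

    public static void main(String[] args) {
        String good = "/* active customers joined this year */ "
                + "SELECT id FROM customers WHERE status = 'A'";
        String bad = "SELECT id FROM customers WHERE status = 'A'";
        System.out.println(hasDescription(good)); // true
        System.out.println(hasDescription(bad));  // false
    }
}
```

A check like this in a build step or code review tool gets most of the benefit without waiting for the syntax itself to become mandatory.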

Tuesday, November 18, 2008

KYD Factor -It keeps coming back again and again

Many times in Oracle projects, especially when I am asked to maintain, enhance or test the performance of backend code written in Oracle SQL and PL/SQL, I have come up against one thing:
        "The KYD Factor, Know Your Data": understand the data in your tables as well as the structures involved.
Some time back David Aldridge mentioned this as one of the points in writing good SQL; I would probably like to keep a copy of this whitepaper from Dan Tow by my side always.

How much time one saves by just understanding this in the first place, rather than beating around the bush trying to refactor SQL that may or may not give you the optimal solution in the long run.

(Quoted directly from the Dan Tow link above:
"Code What You Know

To understand the database design well enough to write functionally-correct code likely to perform well from the start, you should be able to answer a series of questions with confidence:

  • What set of entities does each table represent?
  • What is the complete primary key to each table?
  • What set of entities does each view represent?
  • What is the virtual primary key of each view?
  • Roughly how many rows in production will there be in each table or view?
It is surprising how often owners of broken code cannot answer these very basic questions, but it is hardly a surprise that the result, without this understanding, is broken code!")


 

Thursday, June 26, 2008

Transactional processing in Messaging Systems

As with any other DB/persistence API, a queueing/messaging API must (maybe it should be re-emphasized) support transactions, and no wonder Oracle AQ supports this.
Set-based processing, enqueueing and dequeueing arrays of messages, and XML-based payloads are all possible with AQ.
While Oracle AQ supports a transactional API for persistent queues, for which it conveniently leverages Oracle database tables (queue tables, IOTs to be precise), it did not support a transactional API for buffered messages.
Enqueueing and dequeueing persistent messages has the same overhead as doing selects, inserts and/or deletes on IOT tables, as the case may be.
Buffered messages don't have this overhead, with weaker retention as the downside.
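The transactional dequeue semantics described above can be sketched in a few lines (my toy illustration; real Oracle AQ does this inside the database, tied to the enclosing DB transaction): a dequeued message is only gone after commit, and a rollback puts it back for redelivery:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Toy single-threaded queue sketching transactional dequeue semantics:
 *  a dequeued message is removed only on commit(); rollback() restores it. */
public class TxnQueueDemo {
    private final Deque<String> queue = new ArrayDeque<>();
    private final Deque<String> pending = new ArrayDeque<>(); // dequeued, uncommitted

    void enqueue(String msg) { queue.addLast(msg); }

    String dequeue() {
        String msg = queue.pollFirst();
        if (msg != null) pending.addLast(msg);    // hold until commit/rollback
        return msg;
    }

    void commit()   { pending.clear(); }          // make removals permanent
    void rollback() {                             // restore in original order
        while (!pending.isEmpty()) queue.addFirst(pending.pollLast());
    }

    public static void main(String[] args) {
        TxnQueueDemo q = new TxnQueueDemo();
        q.enqueue("m1"); q.enqueue("m2");
        System.out.println(q.dequeue()); // m1
        q.rollback();                    // consumer failed: m1 back on the queue
        System.out.println(q.dequeue()); // m1 again, redelivered
        q.commit();                      // now m1 is really consumed
        System.out.println(q.dequeue()); // m2
    }
}
```

This redeliver-on-rollback behavior is exactly why a messaging API without transactions loses or duplicates messages when consumers fail mid-processing.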

Tuesday, June 10, 2008

Question on "nested txns" vs "autonomous txns"

For people who need to code an autonomous transaction in Oracle, called in general a "nested top-level (sub)transaction", here are things to watch out for, especially in the context of temp tables.
An autonomous transaction just happens as another transaction, but within the same session as the parent/main txn, so temp tables/GTTs created in the main txn are accessible from within the autonomous txn, except for uncommitted changes made to them in the main txn. (In fact, you cannot even populate a GTT/temp table that has already been populated in the main txn; you would get an error in the autonomous txn.)

I did some study and found that the use of autonomous txns is 99% for auditing/logging purposes, and as Thomas Kyte (asktom) says, any other use of them is surely a problem in design or code. You need to really check your logic for such a use case before you decide to make some txn autonomous.


I would summarise the following for autonomous and nested txns:

Autonomous Txn:
  • An autonomous txn is just an independent transaction, different from the main/parent txn but within the same session. Hence it does not share transactional resources with the main/parent txn.
  • It cannot see uncommitted changes in the main/parent txn, for consistency reasons. (Note: consistency is always at the transaction level, not the session level.)
  • After a commit in the autonomous txn you return immediately to the transactional context of the main/parent transaction, i.e. after a commit in the autonomous txn you are back in the parent txn.
  • Since autonomous txns operate within the same session, they can access GTT/temp tables but cannot see data already populated in them by the main/parent txn. In fact they get an error when they try to populate a table already populated in the parent txn; they can populate one not already populated in the parent/main txn, which would be a rare case.
  • Changes made in the autonomous txn are visible to the parent/main txn based on the isolation level set by the main/parent transaction using the "set transaction isolation level .." statement in PL/SQL. Oracle by default makes committed changes visible (i.e. "read committed" is the default, at statement level), but the "serializable" isolation level can be set for a transaction using "set transaction isolation level serializable" for multi-statement read consistency; i.e. the main/parent transaction will not see any committed changes made later by other transactions, including its own autonomous txn, which is also a separate txn. Hence if your parent/main txn starts with "set transaction .. serializable" it won't see any committed changes done in its autonomous child txn. Normally you would use the "serializable" isolation level for short OLTP txns and most often go with the default "read committed" statement level.
  • An unhandled exception raised from within an autonomous txn rolls it back at the transaction level, not the statement level.
  • Use of autonomous txns in XA/distributed environments was not supported in 9i; I am not sure of the complications in later releases.
  • Autonomous txn use cases are very rare; 99.99% of the time they are for logging/auditing purposes.

Nested Txn:
  • For Oracle, a nested txn simply means a transaction done from within a parent/main txn, as explained already. You can only set savepoints and roll back to them. You can use the JDBC 3.0 standard Savepoint interface for this in Java, or use PL/SQL savepoints to roll back incrementally.
  • Transaction isolation levels can also be set using the JDBC APIs. As said above, Oracle only supports the "read committed" (default) and "serializable" levels.
  • In Oracle, nested txns always see uncommitted changes in the parent/main txn, and changes in nested child txns are likewise always visible to the parent txn.

Please see autonomous and nested transaction example.

Autonomous txn Example (In PL/SQL)
create table audit_test
(
name varchar2(20),
join_date date,
identifier varchar2(200),
log_id number
)
/
Truncate table audit_test
/
Create or replace procedure commit_test
is
pragma autonomous_transaction ;
v_nr number;
begin
select nvl(max(log_id),0) into v_nr from audit_test;
dbms_output.put_line('Maximum before autonomous txn In Child:'||v_nr); -- (0) The autonomous child txn won't see uncommitted changes in the parent.
insert into audit_test values('laksA',sysdate-1,'TestA',2);
commit; -- An autonomous child txn must always end with a commit/rollback
end;
/
declare
v_nr number ;
begin
set transaction isolation level serializable name 'Parent'; -- named 'Parent' txn
-- This main/parent transaction sees db as of this time for multi -statement consistency
-- Without the "serializable" isolation level this main txn would see the committed changes of the autonomous child txn below

insert into audit_test values('laks',sysdate,'Test',1) ;
commit_test; -- calls autonomous child txn
select max(log_id) into v_nr from audit_test ;
dbms_output.put_line('Maximum after autonomous txn In Parent : '|| v_nr); --Output should be 1 with "serializable" and 2 without it in parent txn.
rollback ; -- Doesn't affect committed changes of the autonomous child transaction.
end;
/


Nested Transaction Example (In PL/SQL)
create table audit_test
(
name varchar2(20),
join_date date,
identifier varchar2(200),
log_id number
)
/
Truncate table audit_test
/
Create or replace procedure commit_test
is
v_nr number;
begin
-- set transaction name 'Child' ; You cannot start a true nested txn like this.
select nvl(max(log_id),0) into v_nr from audit_test ;
dbms_output.put_line('Maximum before child txn in Child:'||v_nr); --(1) Nested Child txn always sees uncommitted changes in Parent txn
insert into audit_test values('laksA',sysdate-1,'TestA',2);
commit; -- Everything done in Child as well as in Parent prior to calling Child gets committed.
end;
/
declare
v_nr number ;
begin
set transaction isolation level serializable name 'Parent'; -- named 'Parent' txn
-- This main/parent transaction sees db as of this time for multi -statement consistency
-- Parent always sees uncommitted/committed changes in the nested child txn.
insert into audit_test values('laks',sysdate,'Test',1) ;
commit_test; -- calls nested child txn named 'Child'
select max(log_id) into v_nr from audit_test ;
dbms_output.put_line('Maximum after Child Txn in Parent '|| v_nr); --Output should be 2 in parent txn always.
rollback ; -- No use at all: the commit inside commit_test already committed everything.
end;
/


It looks like there is no true nested-transaction API support in Oracle.

All these transaction concepts, especially transaction isolation levels, as implemented by DBMS vendors, are shaped by the ANSI SQL standard and by the TPC (Transaction Processing Performance Council), which sets standards and requirements for publishing benchmark results.
JTA/JTS (Java Transaction API / Java Transaction Service) is also a driving force behind the transaction APIs provided by vendors. Oracle's implementation of its transaction API differs considerably from other DB vendors', and for performance/integrity reasons Oracle doesn't provide some features.

Monday, July 16, 2007

Cheat Attacks And Kernel Patches

It seems I blog once in a blue moon these days.
I was going through an article: http://www.cs.huji.ac.il/~dants/papers/Cheat07Security.pdf
I thought it was very nice, so let me share it with the readers of my blog.
Another interesting article I read on a Linux site is about a kernel patch introducing two new metrics, PSS (Proportional Set Size) and USS (Unique Set Size), to find out more exactly how a process is using memory on a Linux system.
While the existing VSS (virtual memory size) and RSS (resident size) of a process give you a picture of how much memory a process uses, they never give you the exact picture of how much your process really contributes to memory use.
Here is the link, read on..
http://www.linuxworld.com/news/2007/042407-kernel.html
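To make the PSS idea concrete (my own sketch; the field names follow the Linux /proc/&lt;pid&gt;/smaps format): each mapping reports Rss, Pss and other lines, and a process's total PSS is just the sum of its Pss entries, with shared pages divided among the sharers. The parser below works on a hard-coded sample so it is self-contained:

```java
public class PssDemo {
    /** Sums all "Pss:" lines (in kB) from text in /proc/<pid>/smaps format. */
    static long totalPssKb(String smaps) {
        long total = 0;
        for (String line : smaps.split("\n")) {
            if (line.startsWith("Pss:")) {
                // line format: "Pss:        1234 kB"
                String[] parts = line.trim().split("\\s+");
                total += Long.parseLong(parts[1]);
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Two mappings: a private one (Pss == Rss) and a library shared by
        // 4 processes, whose 4000 kB resident pages count only 1000 kB here.
        String sample =
                  "Rss:    512 kB\n"
                + "Pss:    512 kB\n"
                + "Rss:   4000 kB\n"
                + "Pss:   1000 kB\n";
        System.out.println(totalPssKb(sample)); // 1512
    }
}
```

Summing RSS instead would charge the full 4000 kB to every sharing process, which is exactly the double counting PSS was introduced to fix.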
And since one of my friends joked that writing only technical articles makes my blog rather boring for him, I've decided to add interesting incidents from my professional life here as well.

Saturday, January 20, 2007

Emerging Trends

Aah, I am back on Google's Blogger, and I am posting this after nearly a 10-month gap. I finally managed to overcome all the forces which kept me away from blogging.
Let me go straight to the things which I see emerging in a bigger way with Oracle:
1) Oracle Fusion Middleware, gaining ground with lots of new standards like Java 5, Web 2.0 and the Java Persistence API (JPA) standard.
2) Oracle Enterprise Manager Grid Control (EMGC), a one-stop place to manage all your Oracle products and related infrastructure, data center and grid automation, with a whole lot of plugins to manage and monitor other vendors' products in an enterprise.
3) Grid computing in Oracle's vision has seen many changes in recent years with the emergence of a whole lot of new features in the Oracle 10g database, and plenty more to come with the Oracle 11g database.
(One I am curiously awaiting is query caching, or result-set caching, in the Oracle 11g database, from the application development perspective; the other much-awaited feature is RAC rolling upgrade with 11g, from the database administration side.)

Monday, February 27, 2006

Spfile or Pfile dilemma

Here is something useful for determining what was used to start an Oracle instance.
We have 5 possible answers (clearly, spfile is the way to go in the future, just in case you still have 8i and 9i instances and are switching over to spfiles):

1. On startup the DB first looks for spfileYourSID.ora in the default location (on Windows most probably $ORACLE_HOME/database, on Unix $ORACLE_HOME/dbs) and then looks for spfile.ora in the same default location. An spfile in the default location with the default name was used, and no pfile was used.

This is the case for all default DBs created from 9i onwards.

2. A pfile in the default location with the default name (init<SID>.ora) was used, together with a non-default spfile, via the SPFILE parameter in the pfile pointing to the non-default location of the spfile.

Here, on instance startup the DB looks for the default spfile and, if it is not there, falls back to the default pfile, which in turn points to the spfile in the non-default location.

3. A non-default pfile was used along with a non-default spfile.

Here you specified a non-default pfile on instance startup for some reason known to you.

4. A pfile in the default location with the default name was used, and no spfile.

i.e. it's high time you read about spfiles and switched over.

5. A non-default pfile was used, and no spfile.

Same as case 4.

Friday, January 27, 2006

Blogging and drawing inspiration --Mistaken

In my previous article I posted some tips for quick real-time tuning using OEM (9i) without the actual screenshots (for those who view it in something other than IE); for those who want to see the document with pictures, here is the download link:
http://rapidshare.de/files/11994839/
Guide_to_Real_Time_Performance_Tuning_
Of_Oracle.doc.html


It would be a good starting point for those who want to see how OEM can be used for tuning. Please read the Oracle documentation carefully if you need more understanding of the underlying concepts and issues.

Again, I reiterate....
--------------------------------
It's natural that everyone wants a blog to write whatever they feel and to share the good things they are interested in with others.
When I post good articles here from people who have great experience and are well known in the Oracle arena, I do it as a genuine and sincere attempt to share them with my friends, and not to gain anything else.
If copying the blog template and posting articles by others on my blog hurts or affects the original authors, they can mail me and I will stop doing so.
I accept my mistake that I misspelled an author's name on one occasion, and I express my sincere apologies to the author.
Well, I hope people see that I am genuine here and have nothing to gain: no credit, popularity, or "whatever" one can possibly think of.

-------------------------

Real Time Tuning using OEM--Some Tips

This is an article which I had prepared for one of my clients on using OEM for some quick tuning, and as a starting point for configuring and using OEM and its tuning packs.

Unfortunately I couldn't post the pictures here; I will be posting them shortly.

Guide to Real Time Performance Tuning Of Oracle

Version 1.0

Jun 26, 2005


CONTENTS

Guide to Real Time Performance Tuning Of Oracle

CONTENTS

INTRODUCTION

OEM Tuning Pack

OMS and OEM Repository

Configuring the OEM console for using the Oracle Management Server (OMS) option

Index Tuning Wizard

Oracle Expert

SQL Analyze


INTRODUCTION

Oracle Enterprise Edition provides many features for real-time tuning of an Oracle database. Used well, they serve as an aid in the tuning process, help the DBA identify performance bottlenecks in a system that undergoes constant change as the business changes, and help improve the response time of the application to the satisfaction of the end users.

Oracle Enterprise Edition provides a number of GUIs in its Tuning Packs that can be launched from the OEM console with the Oracle Management Server (OMS) login option.

OEM Tuning Pack

OEM Tuning Packs include the Index Tuning Wizard, Oracle Expert and SQL Analyze as some of the important tools which help the DBA accelerate tuning and, more importantly, identify the workload pattern so as to get accurate tuning recommendations for the Oracle database being tuned. These tuning packs require the OEM console to be configured with an OEM repository under the OMS option.

OMS and OEM Repository

The Oracle Management Server (OMS) uses an OEM repository to store all the tuning-session details and many other DB-related task details. OMS stores the tuning statistics and job details (if jobs are configured), apart from many other things, in the OEM repository for all the target databases to which one connects from the OEM console using the OMS option.


Configuring the OEM console for using the Oracle Management Server (OMS) option

The OEM console needs to be configured properly for using the Oracle Management Server option. This can be done using the Enterprise Manager Configuration Assistant (EMCA), which on the Windows NT platform can be launched from the Configuration and Migration Tools menu.

The following steps illustrate the use of EMCA to configure the OEM console for the OMS option:

STEP 1:


STEP 2:

If you are configuring the OMS to use an OEM repository for the first time, select the first option. Selecting it lets you configure the local OMS on the server. This configuration creates an additional Windows NT service named OracleManagementServer; that NT service is then responsible for the OMS and needs to be in the “Started” state at the end of this configuration process.


STEP 3:

This step helps in creating a repository for the OMS on the server. The OEM repository for the local OMS is just another Oracle schema, which can be configured either in one of the existing databases or in a new database created exclusively for the OEM repository.

If you are creating a repository for the first time, use the first option here.


STEP 4:

Depending on whether you want to create a new database exclusively for hosting your OEM repository or use one of your existing databases for it, choose either the “Typical” or the “Custom” option here.


STEP 5:

If you have chosen the “Typical” option, you will end up in the following screen, which helps you create a new database with the default naming conventions and a new schema (a repository user, along with the configuration data in tables and associated objects under that user).

This new database is like any other database, except that it hosts the OEM repository, and it is the database you connect to every time you launch the OEM console using the OMS option.


If you want a “Custom” configuration of the OEM repository, you will be presented with the following screen, from which you can choose to create a new database (as above) or use an existing database for hosting the OEM repository.

You will be prompted for the OEM repository name and password details later. If you choose the existing-database option, you will be prompted to furnish the details of a user with DBA rights in that database, so that the OEM repository schema can be created there using those login credentials.

The screens which follow from here on are self-explanatory, and any user with a basic understanding of the Oracle DB should be able to finish the configuration process.

If you run into problems configuring the OMS, you can always restart the whole process at any time, and even if you end up with a wrong configuration, you can always drop the repository and recreate a new one.


Index Tuning Wizard

This wizard helps us in arriving at a proper and effective indexing strategy. This Index tuning wizard can go a long way in helping the DBA arrive at a right combination of the indexes for the application and helps in improving the response time of the SQLs which consume more system resources by identifying new possible indexes.

The wizard also gives justifications for the recommended indexes and generates a script to implement the recommended changes. The DBA can then review the recommendations and the script before finalizing the indexing strategy. The tool thus assists the DBA with indexing and thereby accelerates the tuning process.
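The generated script is ordinary DDL that the DBA can inspect and edit before running. Its shape is roughly like the following sketch (the table, column, index, and tablespace names here are hypothetical):

```sql
-- Example of the kind of index DDL such a script contains:
CREATE INDEX ord_cust_status_idx
    ON orders (customer_id, status)
    TABLESPACE indx
    COMPUTE STATISTICS;
```

Reviewing each recommended index this way lets the DBA weigh its query-time benefit against its DML and storage cost before committing to it.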

Oracle Expert

This is a very powerful tool that can automate the collection of statistics at the database, instance, schema, and workload levels and help in arriving at recommendations at a more generic level. Because it works with the real workload and the actual CPU and memory details of the live system, Oracle Expert's recommendations are extremely effective for the tuning DBA. The manual way of collecting Statspack reports and then using them to arrive at tuning recommendations can be partially automated with Oracle Expert, which also speeds up tuning by producing recommendations not only at the SQL level but also at the instance level.
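For reference, the manual Statspack workflow that Oracle Expert partially automates looks like this (run as the PERFSTAT user created by the spcreate.sql installation script):

```sql
-- Take a snapshot before and after the workload of interest:
EXECUTE statspack.snap;
-- ... run the workload you want to analyze ...
EXECUTE statspack.snap;

-- Then generate a report between the two snapshot IDs
-- (the script prompts for the begin and end snapshots):
@?/rdbms/admin/spreport.sql
```

The DBA then reads the report's wait events and SQL sections by hand — the step that Oracle Expert turns into generated recommendations.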

SQL Analyze

SQL Analyze is a powerful SQL tuning tool that comes with many other wizards, such as the Virtual Indexing wizard and the Hint wizard.

SQL Analyze helps the DBA automate the tuning of resource-intensive SQL by identifying statements under different categories, similar to the SQL report sections of a Statspack report, and then helps tune them with supporting utilities like the Virtual Indexing wizard.

The Virtual Indexing wizard in particular is one of the most useful and effective of these tools: it helps estimate the performance improvement an index would bring without actually creating it.
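Under the covers this relies on Oracle's so-called virtual (NOSEGMENT) indexes: the index definition is recorded in the data dictionary without building the index segment, and a hidden session parameter tells the optimizer to consider it. A minimal sketch, with a hypothetical table name:

```sql
-- Create an index definition without building the segment:
CREATE INDEX ord_cust_virt ON orders (customer_id) NOSEGMENT;

-- Ask the optimizer to consider such indexes in this session only:
ALTER SESSION SET "_use_nosegment_indexes" = TRUE;

-- EXPLAIN PLAN now shows whether the optimizer would pick the index,
-- at no storage or build cost:
EXPLAIN PLAN FOR
    SELECT * FROM orders WHERE customer_id = 42;
```

Since no segment is built, the index costs nothing to "try", but for the same reason it cannot actually serve queries; if the plan looks promising, you drop it and create the real index.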

Conclusion

Tuning an existing, already running Oracle system needs to be done with extra care, so as to minimize changes that require more development/maintenance effort and the inevitable testing effort, while still meeting the end users' objectives in the shortest possible time. The Oracle tuning tools described here come in very handy for the DBA in meeting these requirements.

Blogging and drawing inspiration --Mistaken

It's natural that everyone wants a blog where they can write whatever they feel and share interesting things with others.
When I post good articles here by people who are well known and greatly experienced in the Oracle arena, I do it with the genuine and sincere intent of simply sharing them with my friends, not to gain anything else.
If copying the blog template and posting articles by others here hurts or affects the original authors, they can mail me and I will stop doing so.
I accept my mistake: I misspelled an author's name on one occasion, and I express my sincere apologies to the author.
Well, I hope people will see that I am genuine here and have nothing to gain in terms of credit/popularity/"whatever" one can possibly think of.

Friday, January 06, 2006

Good Oracle Books

Thomas Kyte's Good Oracle Books (Must Read) for DBAs and Developers
---------------------------------------------------------------------------------
1. Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions
2. Expert one-on-one: Oracle
3. Effective Oracle By Design
(You can go for 1, as it covers 10g, and also 3 if possible.)

For SQL*PLUS and PL/SQL the following are good books
------------------------------------------------------------------------

1 Mastering Oracle SQL and SQL*PLUS -- By Lex De Haan
2 Mastering PL/SQL -- By Connor McDonald

For General Oracle Concepts from Scalability and Performance of Oracle Applications perspective
----------------------------------------------------------------------------------------------------------------------

1. Scaling Oracle 8i (even though the name says 8i, it is a very good book whose concepts still apply)
   -- By James Morle (expert on scalability, RAID, and SAN/NAS concepts)
   (The good thing is that this book can be downloaded from the www.scaleabilities.co.uk site -- http://www.scaleabilities.co.uk/book/scalingOracle8i.pdf)

Other Good Books are :
1. Practical Oracle 8i -- Jonathan Lewis (again, the name '8i' is a misnomer, as the book still holds true)
2. Cost Based Oracle Fundamentals -- Jonathan Lewis (a very good book of recent times)
If you want to understand how Oracle's Cost Based Optimizer works, you will want to read this book. To get you started, there is a PDF of Chapter 5 (Clustering Factor) that you can download from Apress.
 


Tuesday, January 03, 2006

Project Raptor is available now


Oracle has released a new tool called Raptor. This tool is a Java-based GUI for accessing Oracle databases. It lets you run SQL and PL/SQL and browse database objects.

It includes a lot of useful features, e.g:
- Multiple database connections
- Database reports (pre-defined and user-defined)
- SQL Formatting
- PLSQL debug
- PLSQL OWA output
- ...
http://www.oracle.com/technology/software/htdocs/eaplic.html
http://download.oracle.com/otn/other/raptor-0715.zip



Moreover, it is free to use.

At this moment, an early adopters release is available, the production release is scheduled for early 2006.