Wednesday, September 22, 2010

JSR 166 and JSR 173

For some strange reason I found myself needing to know the design
decisions behind the two JSRs, 166 and 173 (JSR 166 being the
java.util.concurrent concurrency utilities and JSR 173 being StAX, the
streaming XML API). If time permits I will write a post on why these
two have been on my mind of late and on their importance with regard to
performance and scalability.

On a side note, I see a lot of people either use the wrong API or
reinvent the wheel, possibly because they don't know the merits of the
standard, tested APIs/utilities or are unable to use them, and they
sometimes stray into disasters in both the correctness and the
performance of the intended functionality.
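To make the point concrete, here is a minimal sketch (my own
illustration, not from any particular project) of leaning on JSR 166
instead of reinventing it: the java.util.concurrent executor framework
takes the place of a hand-rolled pool of worker threads.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class UseTheStandardApi {
        public static void main(String[] args) throws InterruptedException {
            // JSR 166 ships a tested, tuned thread pool; no need to
            // hand-roll worker threads and a work queue in app code.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 10; i++) {
                final int taskId = i;
                pool.execute(new Runnable() {
                    public void run() {
                        System.out.println("task " + taskId + " on "
                                + Thread.currentThread().getName());
                    }
                });
            }
            pool.shutdown();                            // stop accepting tasks
            pool.awaitTermination(1, TimeUnit.MINUTES); // wait for the rest
        }
    }

The hand-rolled version of this is exactly where I see the correctness
and performance disasters start.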

Saturday, September 11, 2010

Most often the stack/runtime does it better than you

Throughout my interactions with many developers and experienced people,
I sometimes find it hard to explain to them, and to convince them, that
the code written by the runtime is cheaper and better thought out than
an attempt to do the same thing in the application layer.

Most tracking/diagnostic needs are well handled by the technology stack
or the runtime. For example, your OS, DB, or VM can provide tracking
and diagnostic capabilities for the application running on top of them
far more cheaply and safely than you can. So there is no reason to redo
the same work in your application code, and you can focus on the
business logic of the application. Occasionally, though, you may have
to use the diagnostic facilities/external APIs the stack exposes to add
some application context, which is usually the only thing I see missing
from the diagnostic capabilities the stack/runtime provides for you.
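As a hedged sketch of what I mean by "adding context": on the JVM, the
standard JMX API lets you publish an application-level number into the
same diagnostic plane the VM already exposes. The bean name and metric
below are invented for illustration only.

    // OrderStatsMBean.java -- the management interface (hypothetical name)
    public interface OrderStatsMBean {
        long getOrdersProcessed();
    }

    // OrderStats.java -- publish the bean via the platform MBean server
    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class OrderStats implements OrderStatsMBean {
        private volatile long ordersProcessed;

        public long getOrdersProcessed() { return ordersProcessed; }
        public void orderDone() { ordersProcessed++; } // called by app code

        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new OrderStats(),
                    new ObjectName("app:type=OrderStats"));
            // The counter now shows up in jconsole next to the VM's own
            // memory/thread diagnostics -- context added, nothing reinvented.
            Thread.sleep(60000); // keep the VM alive long enough to look
        }
    }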

Having said that, things are different for each stack/runtime today,
the extent to which you can use the underlying diagnostics varies, and
rarely you may see a benefit in writing your own diagnostic
tracking/control mechanism.

Friday, September 10, 2010

Do surrogates really suck?

Do surrogates really suck in performance profiling?

I was going through an article on application performance by a noted
Oracle expert which mentioned that surrogates suck when it comes to
profiling applications for response time. Let me make it very clear
that I don't dispute the author here, but I do want to drive home a
point: while response time is ideally the best thing to measure to
understand the profile of an application/process, measuring it may not
always be possible given the overheads. In such cases a careful choice
of surrogate measure, depending on the technology, should help in
understanding the profile. So the answer is yes and no.
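A minimal sketch of one such surrogate on the JVM (the class name and
the 1-in-100 rate are my own illustration): time only a sample of the
calls, so the common path pays one counter increment instead of two
clock reads, and let the sampled average stand in for the full
response-time profile.

    import java.util.concurrent.atomic.AtomicLong;

    // Surrogate profiling: timing every call may cost too much, so
    // time only 1 call in SAMPLE_EVERY and use the sampled average
    // as a stand-in for the true response-time profile.
    public class SampledTimer {
        private static final int SAMPLE_EVERY = 100;

        private final AtomicLong calls = new AtomicLong();
        private final AtomicLong samples = new AtomicLong();
        private final AtomicLong sampledNanos = new AtomicLong();

        public void run(Runnable work) {
            if (calls.incrementAndGet() % SAMPLE_EVERY == 0) {
                long start = System.nanoTime();
                work.run();
                sampledNanos.addAndGet(System.nanoTime() - start);
                samples.incrementAndGet();
            } else {
                work.run(); // fast path: one counter bump, no clock reads
            }
        }

        public double averageMillis() {
            long n = samples.get();
            return n == 0 ? 0.0 : sampledNanos.get() / (n * 1e6);
        }
    }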

For me, the thoughts and ideas I bring in from the various fields I
have been exposed to often help me understand the performance of a
system, or even find better/cleaner ways.

Tuesday, September 07, 2010

Model is thy code

I have rarely had time to write on my blog over the last few years. But
when I do, there are a few good motivations/forces which drive me to
write on things I consider important.

After over 10 years in IT, I am thrilled to see that I have always been
working on projects which have stability and performance as one of the
key goals/objectives (if not the most important), and I was lucky
enough to get my hands on various technologies, starting from:

IBM Mainframes (MVS/OS 390, JCL, COBOL, CICS, REXX, and DB2 to some extent),
the UNIX C/C++ saga,
VB, ASP, HTML*, Web technologies,
.NET, J2EE systems,
Teradata, and of course Oracle technologies.


Some of the important projects I have worked on include one for a major
Telco (for its IT LOB) in the UK, in their core inter-carrier billing
and provisioning systems/OSS, a big legacy conversion project. I also
witnessed SOA/WebServices technology (as of 2004), which I happened to
come across in one of the projects, though it didn't materialize for
the customer.


It is a mix of all-round exposure to the IT and business sides that I
have been witnessing through all these years, and it is thanks to my
alma mater consulting firm that I got the chance to put my hands on all
these areas. All along, I keep gaining a deeper understanding of the
trio of people, processes, and technologies in projects, and of how
they shape up and interact with each other to deliver. Often, and not
surprisingly, it is the people who play the dominant positive or
negative role, resulting in successful or failed projects. It is a
nightmare to imagine working on a large conversion of a big system X
where several teams/stakeholders are involved. The problem is not the
technology/tools or the interaction between systems, but the complexity
of interacting with the people of the various systems the project
depends on.


Back to the topic of this blog post, "Model is thy code". I have been
saying this for a long time; in fact it is one of my favorite quotes
(of course I don't claim it, and I don't know who may have coined it
first), which I came up with way back in 2003 as "Model is the code",
and most of the libraries I come across miss this pivotal concept. If
your API is not based on a solid model, then you are sure to see the
software using it develop usability/scalability issues in the long run.
Correctness/completeness is an important 'C' factor alongside the usual
other C's that pundits cite for performance and scalability:
C-Concurrency/parallelism in today's multi-core/multi-processor world,
C-Contention, and C-Coherency. Often the matter simply stops at
correctness, because without it there is no sense in moving forward.
Though it may not be possible for someone interested in the other C's
to always ensure the C-Correctness factor, one should at least take a
step back and think about it, for it is possible that your other C's
will be affected, or that it makes no sense to proceed at all.
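As an aside (my reading, not a claim of this post): contention and
coherency are the same two C's that Neil Gunther's Universal
Scalability Law puts numbers on. In its usual form the relative
capacity at concurrency N is

    C(N) = \frac{N}{1 + \alpha (N - 1) + \beta N (N - 1)}

where \alpha is the contention (serialization) penalty and \beta the
coherency (crosstalk) penalty; with \beta = 0 it collapses to Amdahl's
law.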

Looking at the nonsense that is out there in blogs and sites on
software performance and scalability, I can only pity the sheer lack of
understanding of the subject and the dangerous mix of false claims and
ideas from those who have never had any hands-on experience with the
technologies involved.

Enough of my time has been spent on software testing and performance.
It is time I take the initiative and move on to something fresh that
keeps me going.