Sunday, October 03, 2010

Software modeling and issues

Understanding the problem or activity more deeply, together with knowledge
of various architectural design patterns, helps to evolve a reliable,
performant and scalable solution. Applying the same design pattern
everywhere, or settling into a fixed, rigid way of solving all problems,
is a major issue and cripples the functioning of the software. For
example, presentation-, compute- and data-oriented problems are each
unique, and one cannot apply the same pattern to all of them.
One also needs to understand that no matter how hard you analyze and
design during development or testing, some disruptive innovation will
come along that may force you to rethink your design, if not in the near
future then eventually. These disruptive innovations are inevitable and
cannot be fully accommodated in a design up front, but you can anticipate
minor changes and provide "knobs" in the design to turn certain minor
features on or off, features that can still change performance and
scalability to some extent.
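As a trivial sketch of such a "knob" (the property name and the two sort strategies here are purely illustrative, not from any particular system), a system property can switch between code paths with different performance characteristics without a rebuild:

```java
import java.util.Arrays;

public class KnobDemo {
    // Hypothetical knob: a system property chooses between a single-threaded
    // and a multi-core sort path. Flip it with -Dapp.useParallelSort=true.
    static int[] sorted(int[] data) {
        int[] copy = data.clone();
        if (Boolean.getBoolean("app.useParallelSort")) {
            Arrays.parallelSort(copy); // fork/join path, scales across cores
        } else {
            Arrays.sort(copy);         // default sequential path
        }
        return copy;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sorted(new int[]{3, 1, 2})));
    }
}
```

Both paths produce the same result; the knob only changes how the work is done, which is exactly the kind of minor, reversible tuning a design can anticipate.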

It is not surprising, at least for me, having spent the last 10+ years
in software testing on large systems, to see that most Java apps still
suffer from concurrency and GC issues.
A lot of research is going into the concurrency area, with functional
constructs similar to those of Erlang, Clojure and Scala coming into
Java 7, and more into atomic locking constructs and scaling across cores.
This is still a growing area, with software/hardware transactional
memory, message passing, shared state with more controls and other
techniques all being explored in terms of code clarity, time/space
trade-offs and so on, and even hardware/software co-design like that of
Oracle's Exalogic and Azul's compute appliances.
Another area is embracing some sort of predictive or limited GC, or
avoiding GC as much as possible.
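As a small illustration of the atomic constructs mentioned above (a sketch of my own, not tied to any particular app), a shared counter built on `java.util.concurrent.atomic` avoids both locks and lost updates:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterDemo {
    // Increment a shared counter from several threads without a lock.
    static long count(int threads, int perThread) throws InterruptedException {
        AtomicLong hits = new AtomicLong();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int n = 0; n < perThread; n++) {
                    hits.incrementAndGet(); // lock-free compare-and-set
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return hits.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(count(4, 100_000)); // always 400000 -- no lost updates
    }
}
```

With a plain `long` and unsynchronized increments, some of the updates from the four threads would be lost; the compare-and-set loop inside `incrementAndGet` makes the total deterministic without a `synchronized` block.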

I have been involved in software testing from the stability, performance
and scalability aspects right from 2000, starting with mainframes (COBOL,
CICS) up to the latest apps running on the JVM using various design
patterns. Many times I tell people things to watch out for, or give my
opinions, which they initially neglect, and later they come back to me
and say, "Oh yes, you said that some time back... I didn't get it..."

Friday, October 01, 2010

Concurrency and scaling choices

Let me make it clear for whoever is reading my blog the following:

"I neither speak for the company I work for,
nor does my company speak for me.
All the ideas, thoughts and impressions on the tools I list in my blog
are my own."

JSR 166, the concurrency utilities, applies from J2SE 5.0 onwards,
whereas JSR 237 was an attempt to take it to J2EE 1.4 onwards at the
container level.
JSR 173, the streaming XML parser aka pull parser (StAX), is something
I find needed in certain situations, yet it is not considered by most
people when it comes to XML parsing.
(RogueWave and VTD-XML are other choices I hear about from my friends,
but I don't know much about them; something to think about some time
later.)
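For the curious, the pull style of JSR 173 (StAX, bundled with the JDK since Java 6) looks roughly like this; the XML and the element name are just an illustration:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class PullParseDemo {
    // Collect the text of every <item> element. Unlike DOM, nothing is
    // built in memory, and unlike SAX, the caller drives the parsing loop.
    static List<String> itemTexts(String xml) throws XMLStreamException {
        List<String> out = new ArrayList<>();
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT
                    && r.getLocalName().equals("item")) {
                out.add(r.getElementText()); // reads text, advances past </item>
            }
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(itemTexts("<list><item>a</item><item>b</item></list>"));
    }
}
```

The pull model is what makes StAX a good fit for large documents where only a few elements matter: you skip events you don't care about instead of handling callbacks for all of them.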

JSR 107 (JCache, caching in Java) is another big area that interests
me, with many technologies emerging to support it.
(Oracle's Coherence, JGroups with Infinispan, GigaSpaces, Terracotta,
GridGain and the open-source Hazelcast are some of the useful ones if
one is interested in exploring this area, each with varying capabilities
and use cases.)
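These products differ widely, but the core idiom they all expose is get-or-compute. Here is a bare-bones sketch of that idiom using only JDK classes (my own illustration; real JSR 107 providers layer eviction, expiry, replication and distribution on top of this idea):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

public class CacheSketch {
    private final ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

    // Return the cached value for key, running the loader only on a miss.
    String get(String key, Function<String, String> loader) {
        // computeIfAbsent invokes the loader at most once per missing key,
        // even under concurrent access.
        return store.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        CacheSketch cache = new CacheSketch();
        System.out.println(cache.get("42", k -> "loaded:" + k)); // miss -> loads
        System.out.println(cache.get("42", k -> "never-run"));   // hit  -> cached
    }
}
```

The second lookup never runs its loader, which is the whole point: the expensive computation or remote fetch happens once per key.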

Wednesday, September 22, 2010

JSR 166 and JSR 173

For some strange reason I found myself needing to know the design
decisions behind the two JSRs, 166 and 173. If time permits, I would
cover in a blog post why I feel these two have been running in my mind
of late, and their importance.

On a side note, I see a lot of people often either use the wrong API or
reinvent the wheel, possibly because they don't know the merits of, or
are unable to use, the standard tested APIs/utilities, and sometimes
they stray into disasters in both the correctness and the performance of
the intended functionality.
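A typical case of reinventing the wheel is hand-rolling thread signalling with a busy-waited boolean flag, which wastes CPU and is prone to memory-visibility bugs. The standard `CountDownLatch` from JSR 166 does it correctly in a few lines (a sketch of my own):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    // Wait for a worker thread to finish, using the tested utility instead
    // of a hand-rolled `while (!done) {}` spin on a non-volatile flag.
    static String runWorker() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        StringBuilder result = new StringBuilder();
        Thread worker = new Thread(() -> {
            result.append("work complete");
            done.countDown(); // signal the waiting thread
        });
        worker.start();
        done.await(); // blocks until countDown(); also publishes the worker's writes
        return result.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorker());
    }
}
```

Besides blocking instead of spinning, `await()` establishes a happens-before edge with `countDown()`, so the worker's writes are guaranteed to be visible, which is exactly the correctness detail the home-grown version usually gets wrong.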

Saturday, September 11, 2010

Most often the stack/runtime does it better than you

Throughout my interactions with many developers and experienced people,
I sometimes find it hard to explain to them, or convince them, that the
code written by the runtime is cheaper and better thought out than
trying to do the same in the application layer.

Most tracking/diagnostic needs are well handled by the technology stack
or the runtime. For example, your OS, DB or VM can provide tracking and
diagnostic capabilities for the application running on top of them much
more cheaply and safely. So there is no reason why one would need to do
the same in application code; you can focus instead on the business
logic of the application. Yet one may occasionally have to use the
underlying diagnostic facilities or the external API they expose to add
some application context, which is about the only thing I see missing in
the diagnostic capabilities the stack/runtime exposes for you.

Having said that, things are different for each stack/runtime today, the
extent to which you can use the underlying diagnostics may vary, and
only rarely will you see a benefit in writing code of your own for a
diagnostic tracking/control mechanism.
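For the JVM specifically, an example of what the runtime already tracks for free is the `java.lang.management` API; no application-level bookkeeping is needed to read these numbers:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class RuntimeDiagDemo {
    public static void main(String[] args) {
        // Heap usage, maintained by the JVM itself -- no app-side counters.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("heap used: "
                + mem.getHeapMemoryUsage().getUsed() + " bytes");

        // Live thread count, also tracked by the runtime.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("live threads: " + threads.getThreadCount());
    }
}
```

The same beans are reachable remotely over JMX, so monitoring tools can read them from outside the process without a single line of instrumentation in the application.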

Friday, September 10, 2010

Do surrogates really suck?

Do surrogates really suck in performance profiling?

I was going through an article on application performance by a noted
Oracle expert which mentioned that surrogates suck when it comes to
profiling applications for response time. Let me make it very clear that
I don't dispute the person here, but I do want to make a point: while
response time is ideally the best measure for understanding the profile
of an application or process, it may not always be possible to capture
it, considering the overheads. In such cases a careful choice of
surrogate measure, depending on the technology, should help in
understanding the profile. So the answer is yes and no.
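As a sketch of that trade-off (my own illustration, not from the article in question): timing every call directly is the accurate but costlier measure, while a plain invocation counter is a near-free surrogate that can later be combined with sampled timings.

```java
import java.util.concurrent.atomic.LongAdder;

public class SurrogateDemo {
    // Direct measure: wall-clock response time of each call. Accurate, but
    // two nanoTime() calls per invocation add overhead on hot paths.
    static long timedCall(Runnable work) {
        long start = System.nanoTime();
        work.run();
        return System.nanoTime() - start;
    }

    // Surrogate measure: just count invocations. Near-zero overhead; the
    // count can be multiplied by a sampled average cost afterwards.
    static final LongAdder calls = new LongAdder();
    static void countedCall(Runnable work) {
        calls.increment();
        work.run();
    }

    public static void main(String[] args) {
        long ns = timedCall(() -> { /* workload goes here */ });
        countedCall(() -> { /* workload goes here */ });
        System.out.println("elapsed ns: " + ns + ", calls: " + calls.sum());
    }
}
```

Whether the surrogate "sucks" depends on how well the count correlates with the response time you actually care about; when it correlates well, it is the cheaper way to profile a hot path.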

For me, many times the thoughts and ideas I bring in from my various
fields of exposure help me understand the performance of a system, or
find better and cleaner ways.

Tuesday, September 07, 2010

Model is thy code

I have rarely had time to write on my blog over the last few years. But
when I do, there are a few good motivations and forces which drive me to
write on things I consider important.

After over 10 years in IT, I am thrilled to see that I have always
worked on projects which have stability and performance as one of the
key goals/objectives (if not the most important), and I was lucky enough
to get my hands on various technologies, starting from:

IBM mainframes (MVS/OS 390, JCL, COBOL, CICS, REXX and DB2 to some extent),
the UNIX C/C++ saga,
VB, ASP, HTML and Web technologies,
.NET and J2EE systems,
Teradata and of course Oracle technologies.

Some of the important projects I have worked on include one for a major
telco (for its IT LOB) in the UK, in their core intercarrier-billing and
provisioning systems/OSS, a big legacy-conversion project. I had also
been a witness to SOA/Web Services technology (as of 2004), which I
happened to come across in one of my projects, though it didn't
materialize for the customer.

It has been a mix of all-round exposure on the IT and business sides
that I have been witnessing all through these years, and thanks to my
alma mater consulting firm I got the chance to get my hands on all these
areas. All along these years I keep gaining a deeper understanding of
the trio of people, processes and technologies in projects, and how they
shape up and interact with each other to deliver.
Often, and not surprisingly, it is the people who play the dominant
positive or negative role, resulting in successful or failed projects.
It is a nightmare to imagine working on a large conversion of a big
system X where several teams/stakeholders are involved.
The problem is not in the technology/tools or the interaction between
systems, but the complexity of interaction with the people of the
various systems that the project depends on.

Back to the topic of this blog post, "Model is thy code". I have been
saying this for a long time, and in fact it is one of my favorite
quotes, which I coined way back in 2003 as "Model is the code" (of
course, I don't claim it, and do not know who may have coined it first).
Most of the libraries I come across miss this pivotal concept. If your
API is not based on a solid model, then you are sure to see the software
using it develop usability and scalability issues in the long run.
Correctness/completeness is an important 'C' factor, alongside the
other usual C's that pundits claim for performance and scalability
(C for concurrency/parallelism in today's multi-core, multi-processor
world, C for contention, C for coherency). Often it all simply stops at
correctness, and then it doesn't make sense to move forward.
Though it may not always be possible for someone interested in the
other C's to ensure the 'C-correctness' factor, one should at least
take a step back and think about it, for it is possible that your other
C's may be affected, or that it doesn't make sense to proceed ahead.

Looking at the nonsense that is out there in blogs and sites on software
performance and scalability, I can only feel pity for the sheer lack of
understanding of the subject, and for the dangerous mix of false claims
and ideas from those who haven't had any hands-on experience with the
technologies involved.

Enough of my time has been spent on software testing and performance. It
is time I took the initiative and moved on to something afresh that
keeps me going.