Posted by: imranadeel | August 11, 2010

Possible Layouts of IBM WebSphere Products with Temenos T24

Whenever a bank purchases a Core Banking System (CBS), it invariably calculates the TCO of the software and hardware required to run the new CBS.  The cost of software products, of late, has been pegged to the specifications of the server, especially the type of processor being used and the number of cores in it.  This factor multiplies the price of such software when it is deployed in a high-end environment.  Imagine what a bank like HSBC would have to pay for WebSphere MQ licenses, versus what a community bank would pay for the exact same product.  I have observed that this factor has a direct impact on the way organizations deploy WebSphere Application Server (WAS), IBM HTTP Server (IHS) and MQ Server (MQ).

In an ideal situation, the web server (IHS) should be separate from WAS.  WAS (hosting the Java application) passes messages to the underlying main application server (such as T24), which necessitates having MQ installed on both the WAS and the T24 server machines.  Take a look at the image below.

Ideal Implementation: IHS, Firewall, WAS+MQ and T24+MQ


While the IHS-WASMQ-T24MQ installation is ideal, it is expensive: you have to buy MQ licenses, and lots of them.  Usually, a middle ground is sought so that the licensing costs can be rationalized.  The first option is to separate MQ out onto a new server of its own.  This certainly has its implications, in that messages can still be lost at the WAS and T24 servers.  The image below shows this setup.

IHS+WAS, MQ and T24


Another option could be a setup where MQ is installed on the WAS and IHS server.  But in this setup, T24 is still without MQ.



Yet another option is to install IHS, WAS and MQ on the same server.  This means that the server specifications should be sufficient to handle three server applications, especially WAS and MQ.  The newer versions of T24, such as R8 onwards, have extensive front-end developments, which are bound to put pressure on WAS.  Therefore, the server has to be equipped proportionately.


A Good Alternate: IHS+WAS+MQ, and T24+MQ

But as stated earlier in this post, having MQ installed on the T24 server inflates the bill.

I mentioned in a previous MQ-related post that if a server/software component goes down, and the timeout for the channel (to which that server belongs) is set to a number of seconds within which the component may not recover (say, 180 seconds), then there is not much MQ can do to help anyway.  If the component becomes operable only after the timeout duration, then even if it picks up messages from MQ, they will not be processed, as the session will have timed out.

In the end, I will repeat what I said at the beginning of this post: your architecture depends on your budget.  Ideally, MQ should be there whenever information leaves one physical server and enters another.

Imran Adeel Haider

This post is also available at

Posted by: imranadeel | July 31, 2010

Fare thee well, Farook Ali Khan!

Farook Ali Khan - 2004

In the plane crash on 28th July in Islamabad, we lost one of the finest men known to the Pakistani IT industry: Farook Ali Khan. I had the honor of working with him from 1999 to 2006, at CSoft (Islamabad) and AMZ Access (Karachi). Farook was a great leader and a thorough gentleman. He always set his aims very high, and played the big game. AMZ Ventures (the holding company of AMZ Access) was Pakistan’s second-ever technology IPO; Farook was the brain behind it and its business plan. What made him different from other corporate leaders was that he was an inspiration in the true sense. He worked wholeheartedly, passionately, diligently and smartly. He was a great salesman, and would always put his point across most convincingly to his clients and his team. He nurtured a whole line of people to work with him; I was one of those lucky people. He taught us how to think, act, work, present and carry ourselves in tough situations, both in business and in life.

In addition to his corporate and technology leadership, he was also a great patriot and a very informed Muslim. Another aspect that set him apart from other leaders was that, to all of us who worked with him, he was a mentor. He would often take us out for dinner and listen to what was bothering us. He would then give us his insights and perspectives on our challenges. It was in those sessions that we all bonded with him, and learned how to think and act maturely, sensibly, logically and creatively. He also held himself to a very high set of ethics, and never offended anyone.

To us, he was like family; the elder brother. I am thankful to Allah for his companionship, sad because I had not met him in a while, and bereaved over his sudden demise. I pray for the best for his soul in his next abode, and for patience for his family.

Farewell Farook sb. I’ll miss you my entire life. And thank you for all that you did for us.


Posted by: imranadeel | July 22, 2010

My Latest Psychedelic/Prog-Rock Discovery: Jane

22nd July 2010.

A friend of mine asked me to search for the German band “Jane” on YouTube and listen to their album “Together.” I found the album and listened to the tracks “Spain” 1 and 2. And by golly, I was blown away. The sound is so rich and the music so deep, somewhat similar to the great Pink Floyd’s “Echoes.” I got hooked on it right away. It’s good to know that prog rock clicked in countries other than the UK as well. At the same time, I am surprised that we hadn’t heard of these fellows before. The stuff they produced is so good that I haven’t even listened to anything other than “Spain” yet. I will listen to the rest when I get out of it.

And what’s even better, is that these guys are still playing. You can check their website and complete discography at

Thank you Sina K, for the great introduction.


Posted by: imranadeel | July 17, 2010

WebSphere MQ Server with Temenos T24: Part 2

16th July 2010.

This is the second entry on MQ Server with T24.  The previous entry can be found here.

If you read my last post, you might think that I am saying it’s OK to use T24 without MQ Server.  It’s not.  I will shed light on various aspects of this as I go along.

First off, MQ offers message persistence; without it, a server crash or a network outage means messages are lost either at a host or in transmission. For instance, a teller clicks the option to deposit cash into a customer’s account.  The request comes to the application server, then goes to the T24 server, which processes it and responds with a confirmation.  On the way back, the web server goes down, or the network breaks.  The teller now receives either a timeout message (in case of a network failure) or nothing at all (if the web server crashed).  But the transaction has already been committed at T24.  The teller will try to log in again and post the transaction a second time.  If the failed component is back up, the transaction goes through, and the teller gets a confirmation from T24.  So now T24 has two transactions instead of one.
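The duplicate-transaction scenario above can be sketched in a few lines. This is a toy simulation; `post_deposit` and `core_ledger` are illustrative stand-ins, not T24 APIs.

```python
core_ledger = []  # transactions committed at the core (stands in for T24)

def post_deposit(account, amount, response_lost=False):
    """Commit the transaction at the core, then try to return a
    confirmation.  If the response path fails, the teller sees a
    timeout even though the core has already committed."""
    core_ledger.append((account, amount))  # committed at the core
    if response_lost:
        raise TimeoutError("web server / network failed on the way back")
    return "confirmed"

# First attempt: the response is lost, so the teller sees a timeout...
try:
    post_deposit("ACC-1", 500, response_lost=True)
except TimeoutError:
    pass  # teller logs in again and retries, unaware the core committed

# ...and the retry succeeds, leaving two deposits instead of one.
post_deposit("ACC-1", 500)
print(len(core_ledger))  # → 2
```

The retry is perfectly reasonable from the teller's point of view; it is the lack of delivery guarantees on the response path that turns it into a duplicate.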

Message persistence could have helped here, if implemented with some level of clustering at the web (and application) server and MQ server levels.  For example, had there been two clustered web servers and one crashed, the other would have been able to receive the message, but only if there were an MQ server from which it could poll the incoming messages.

T24 Servers Layout

Assembly of servers and their interconnections

Similarly, if the T24 server goes down right when a teller sends a withdrawal request, MQ will hold the message till the T24 server is back.  TC Server will then pick the message from MQ’s “IN” queue and process it; all this without the teller ever knowing that there was a problem.  This, of course, assumes that the T24 server comes back within the timeout window.

But therein lies another problem.  You see, a timeout is configured on every browser channel; typically 180 seconds (3 minutes).  If the web server doesn’t receive a response from T24 (or MQ) within this span, it times out and shows the user an error screen.  The user can then only go to the login page, which the web server hosts.  So even if MQ is holding a message in its queue, if the other party/network is not back within 3 minutes, the session is going to time out anyway, and the user will not be able to use that message.  It will be orphaned in MQ, as no party is interested in taking it.  The MQ server will wait as long as its timeout setting allows, then remove the message from its main queue and put it in the Dead Letter Queue.  So even if you have an MQ server, if the transaction is delayed beyond 3 minutes (or whatever your timeout setting is at the browser/TC Client and TC Server levels), the session expires anyway, and nobody learns whether the transaction ever went through or not.
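Here is a scaled-down toy simulation of that orphaned-message behaviour. The timeouts are shrunk from 180 seconds to fractions of a second, and the queue names and the move-to-DLQ step follow the description above, not actual MQ semantics.

```python
import time

SESSION_TIMEOUT = 0.2  # stands in for the 180-second browser channel timeout

main_queue, dead_letter_queue = [], []

def put(msg):
    # Producer (TC Server side) places a response on the queue.
    main_queue.append({"body": msg, "born": time.monotonic()})

def consume_after(delay):
    # Consumer (TC Client side) only comes back `delay` seconds later.
    time.sleep(delay)
    msg = main_queue.pop(0)
    if time.monotonic() - msg["born"] > SESSION_TIMEOUT:
        # The browser session has already expired, so no party wants
        # this message any more; it is moved to the dead letter queue.
        dead_letter_queue.append(msg)
        return None
    return msg["body"]

put("withdrawal confirmation")
result = consume_after(0.3)            # back only after the session expired
print(result, len(dead_letter_queue))  # → None 1
```

The message survived the outage, but survival alone is worthless once every party that could consume it has given up waiting.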

My advice here is that IT executives should tell their users, in the T24 early adoption/training sessions, that if they face a timeout after posting a transaction and have to log in again, they must first check whether the transaction they just posted actually went through.

In my next post, I will explain the scenarios where MQ Server really comes into play as an essential component in the core banking system.  Your feedback and comments are always welcome.


Posted by: imranadeel | July 16, 2010

WebSphere MQ Server with Temenos T24: Part 1

15th July, 2010

We know that Temenos T24 has a browser component, which offers the web-based UI for the core banking system.  This component connects to the core through a module called TC Client, which is installed as a Java WAR file on the web server (the web server is typically IBM WebSphere Application Server).  On the server side, there is a component that acts as T24’s window to the outside world, called TC Server, which is installed on the T24 Application Server.  To facilitate message passing between the server and web browser, there is a queuing server in between (which is typically IBM WebSphere MQ Server).

By design, Temenos T24 doesn’t actually need an MQ server.  Rather, one needs MQ for guaranteed message delivery from the browser/TC Client to TC Server.  I have seen a bank run T24 without MQ at all.  This works because TC Client can talk to TC Server over plain TCP/IP as well.  The downside of this arrangement is that there is no message persistence, so in case of a disaster the server may not receive a request, or a client may not receive a response at all.  The sample Channels.xml file that ships with TC Server has TCP-based channels between TC Client and TC Server enabled by default.

As long as a bank is willing to take this risk, it is OK not to use MQ Server at all.

But then, things are not as simple in the real world.  I’ll explain that in detail in the next entry.

Imran Adeel Haider.

This blog entry is also available at

I am not sure whether this practice still exists, but until about three years ago, Islamic banks would finance something (in any mode) but benchmark it against KIBOR (in Pakistan, at least).  When we asked some Islamic finance scholars why they were benchmarking their rates against KIBOR, they responded that if a seller of a soft drink keeps the price of his tin can equal to that of beer, that is not wrong, and thus the practice was OK.

Now that I am more involved in the corporate finance of my company, I am very confused about this issue.  Financing a business is not like a soft drink, which is more of a luxury, or a feel-good thing; it is like water.  It is very hard to survive without it, if not impossible.  Businesses (and we) need funding to grow, expand and enhance our capacity.  If there is no equity available, we have to turn towards financing.  Financing, then, is like water, a necessity: we do not have other halal options for financing.  Therefore, if a seller of water bottles raises his price to match that of beer (which obviously is high, and calculated using haram bases and practices), isn’t that wrong?  Doesn’t it automatically qualify as exploitation, usury and coercion, just like conventional banking?

With such questionable practices in the Islamic finance industry these days, it is hard to even talk about going for an Islamic mode of financing with my CEO, who would laugh in my face if I told him to do so.

Can we persuade the Islamic banks to stop doing it this way?  Can the Shariah advisers stand up and instruct their institutions to abandon such practices?  Or can someone correct me if this is no longer the practice?


This point is also posted as a question on IBFNet at

by Imran Adeel Haider

This is the concluding bit of my thoughts about the title subject.

I had touched upon the problems typically faced in large projects, with reference to core banking system implementations. Now I will hint at the ingredients that should, or must, be part of a winning strategy: one that succeeds with fewer headaches and overruns. A corollary first: such projects generally overshoot their time, effort and/or budget estimates, no matter how hard you plan. You cannot know all the unknowns in advance, unless you have a very experienced team with you, which you generally do not.

Get the Right People

Who are the “right” people? Here are some of their qualities:

1. Team oriented – taking everyone along

2. Offering creative solutions to problems thrown at them AND motivating others to think likewise

3. Disrespecting boundaries. Wait, that was a harsh statement. Read it this way: Going beyond boundaries to get things done AND not making enemies in the process

4. Taking responsibilities and initiatives for things not directly under them AND encouraging others to do the same

5. Having fun in their work, AND making it fun for others equally well

6. Having a passion for success

7. Ready and willing to challenge all assumptions; nothing taken for granted

8. Technical expertise: Adept in technology, and aware of how it can benefit them and their work

9. Analytical minds: Ability to get to the core of things and then use creativity

10. Comprehension

11. Strong nerves (because such projects will have great pressure)

12. Dedication towards the common goal

Train Your Teams

Ensure that your teams are trained to the max. There must be functional as well as technical training delivered to your teams. There is no shortcut; there is no other way. This HAS to be done. I cannot stress it enough. The threats are plenty: attrition, reassignment of people to different projects/departments, team expansion, and the same problems on the implementer’s side. The implementing companies do not have many people, and there is more work to do than ever. Therefore, it is natural for things to slip. Train your people sufficiently, keep a training cycle running for new hires, and then pat yourself on the back when it pays off.

Introduce Quality Assurance and Quality Control

QA and QC are different: QA deals with measures to ensure quality; it is proactive and is planned before the project begins. QC is reactive, and happens while or after the deliverables are churned out. You need people who can define the quality standards, with the help and input of business users. QC people cannot do it in isolation. Even if they do, they will be limited to interface standards, such as the layout of controls (textboxes, lists, combos, etc.), their sizing, and input validation as per the underlying data types. Hold business users accountable for the quality of the application along with the QC personnel. That is the only way they will take ownership and stand behind the final product.

Effective Source and Version Control

While source control tools are readily available for modern languages, systems like Temenos T24/Globus come with their own proprietary languages. You can still use version control software, along with a procedure for file check-in and check-out. Ensure that your people use versioning and source control religiously.

Users will revert to their previous requirements, a new manager may want to revert a newly developed process to how it worked before, and some user may realize that he/she was wrong in defining a requirement a certain way. All of these will require old files to be restored. Version control comes to the rescue here.

Scope Finalization

Things will always evolve. Users will think of new possibilities, or realize that they were mistaken in stating the requirements the way they did. The requirements document will therefore be a living document (if there is one). I would never say that you should go waterfall, the old and classic model of software delivery. But you should be effective in prioritizing and classifying things as critical, important, normal or frill, and then assigning them a target version. This way, you will not lose track of things when you are in the next phase, and you will still deliver the most critical requirements in the current one.

Unrealistic Timelines: Induce Reality

The business people and users want the application yesterday, and the implementers will promise to deliver it somewhere around the same time. After all, they need to win the contract. You should know that things can, and most probably will, get delayed. While the implementer is the right party to give a timeline, in my experience they don’t; they have to live up to their promise of delivering within a certain time span. To avoid surprises, you must induce reality. In the beginning, you have no idea how long delivery will take, as the exact scope of customizations is not clear (and it will not be very clear until very late in the process). Also, the technology may be new, and the way things work in the new system may differ from how they worked in the old one. Here are the right ingredients for assessing how long it will take:

[ (Features × Time required to develop a feature) / (Available resource hours) ] + Adjustment for past experience and empirical evidence + Adjustment for gut feel + A month or two

And then, keep reassessing the date as you go.
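The recipe above can be written down as a simple function. All numbers below are invented for illustration; the adjustments are judgment calls, not constants.

```python
def estimated_months(features, hours_per_feature, resource_hours_per_month,
                     experience_adjust=0.0, gut_feel_adjust=0.0,
                     buffer_months=2):
    """Rough project-duration estimate, in months, per the recipe above."""
    base = (features * hours_per_feature) / resource_hours_per_month
    return base + experience_adjust + gut_feel_adjust + buffer_months

# e.g. 120 customizations at 40 hours each, a team delivering 800
# productive hours a month, plus past-experience and gut-feel padding:
print(estimated_months(120, 40, 800,
                       experience_adjust=1.5, gut_feel_adjust=0.5))  # → 10.0
```

The point is less the arithmetic than the habit: rerun the function whenever the feature count or the available hours change, rather than defending the original date.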

I really liked Jim McCarthy’s (former Program Manager of Visual C++ Group at Microsoft) statement: Don’t trade a bad date for an equally bad date. Don’t say you know when you don’t know.

Use Some Quality Control Tools

There is a wide variety of tools available in the market: IBM Rational, HP LoadRunner, Validata, FrontOffice Technologies and WebLoad. Use them to test your applications’ operating limits. One fine day your implementer may tell you that your hardware is not sufficient to meet the computing requirements that your users have asked for, in the form of input validations, lists of values, and so on. At that point, you must verify the network usage, disk I/O, processor consumption and memory utilization of the application. For network usage stats, a very simple but effective tool is DUMeter, which tells you the total incoming and outgoing volume of data, though not which application is using how much bandwidth. We have used Rational Performance Tester and WebLoad, and found them decent enough to get you going. But beware of the licensing requirements: if you haven’t budgeted for their costs, they may come back and hit you later when you need them.
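When no commercial tool is at hand, even a tiny home-grown harness can give a first feel for throughput. A minimal sketch follows; the `transaction` function is a placeholder for a real request against your application, and the thread and iteration counts are arbitrary.

```python
import threading
import time

def transaction():
    """Placeholder for one real request against the system under test."""
    time.sleep(0.001)  # simulate a short round trip to the server
    return True

def worker(iterations, results):
    # Each worker fires `iterations` transactions and records successes.
    ok = sum(1 for _ in range(iterations) if transaction())
    results.append(ok)

results = []
threads = [threading.Thread(target=worker, args=(50, results))
           for _ in range(8)]  # 8 concurrent "users"
start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

total = sum(results)
print(f"{total} transactions in {elapsed:.2f}s ({total / elapsed:.0f} tx/s)")
```

This will not replace a real load tester, but it is enough to spot gross regressions between builds before the users do.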

Keep an Audit Trail of Requirements and Features

You need to record every piece of information that relates to the project, its modules and the user requirements. Use a tool to collect and classify this information. The tool should offer some workflow for management approval of features and issues, file management to store documents, spreadsheets and images alongside features, the ability to assign priorities, a space for comments (from users, developers, analysts, QC people, management, etc.), and progress tracking. It should also provide reports on pending, assigned, resolved and closed issues.

I have found Mantis ( to be a very effective tool. It is a free, open-source tool built using PHP and MySQL (which are free as well), is very easy to install and manage, and offers great control and features. I simply love it, and so do my clients. I highly recommend using it throughout your project. Just make sure that you take regular/daily backups of your MySQL DB and the Mantis folder (because it keeps attached files in the directory structure, not in the DB).

Take Care of the Human Aspect of Things

Ultimately, this is more about people management than technical management. If you have the right people (as mentioned above), you should be able to pull through unscathed. Ensure that your team is performing at its optimum and that there are no frictional issues within it. Keep the environment lively but professional. As Jack Welch writes in his book “Winning,” leaders celebrate. Keep appreciating your teams’ efforts.

Mix experience with fresh blood in your teams. That way you get both maturity and agility, the mix you need most. Give people charge of their assignments, but keep monitoring them until good practice becomes habit. Even then, you may want to run some random checks. Never let go of the situation; always keep a handle on things. But don’t be a control freak either. If you become one, you will hurt the project, your organization, and yourself most of all.


These were my thoughts about the challenges that I have experienced myself, or have observed with our clients, who have gone for new core banking systems recently. I wish them all the best of luck.

Our readers can always provide their comments, suggestions, critiques and acknowledgements. We appreciate your taking the time to correct or appreciate us.

In the next few issues, we will get a bit more technical: middleware, and the difference it can make. Till then, take care of yourself and your projects.

Imran Adeel Haider.
2nd March 2009.

This blog will also be available at

This is a sequel to my previous entry, titled “Challenges to the Core Banking Solutions Implementation Projects – 1.”

Continuing on the problems faced in core banking solution implementation projects, let us look at some other problems.

Lack of IT Resources with Banks
Usually, the core banking solution vendors claim to offer lower overheads, advanced technology, lower maintenance costs, and so on.  After all, such claims win over the clients, and especially their finance people, who see IT as a big cost center instead of a strategic asset.  Vendors are also aware that banks will calculate a minimum five-year total cost of ownership (TCO) before deciding on a vendor and its solution.  If the system requires a large IT team, that reflects negatively in the TCO spreadsheet, so vendors do not recommend that banks hire an adequately sized IT team.  Even when they are forthright about it, the banks may not take the suggestion seriously, and try to bargain the number down.  Also, banks are not that strong on IT and current trends in software, such as agile methodologies and object-oriented frameworks.  They thus cannot, or find it hard to, appreciate the roles of user experience, documentation, quality control and quality assurance engineers.  Ironic, isn’t it?  During negotiations with the application vendors, the same bank managers vouch to create their own IT team to manage the solution during and after implementation.  They ask the vendors to train their teams on the technology and features so that the bank can reduce its dependence on the external vendor.  Yet they do not hire enough people.

In my experience, I haven’t heard of even a single bank going for ISO 9001:2000 compliance.  But don’t be mistaken here: a number of banks are very active on IT security and service delivery.  While this is a good trend, showing that the banks’ IT managers are waking up to security threats and service delivery, they are not aware of the advancements in software development.

Lack of Experienced Project Managers
A bank changes its core banking system once in decades.  The same goes for any other IT solution that a bank purchases, such as credit, risk management, middleware and the like.  Also, such projects are so large that it is difficult to find project managers in the market with experience of similarly sized projects.  The project manager is expected to provide insight, skill, communication and foresight to steer the project in the right direction.  While a project manager can bring skill and communication, if he/she is not from the banking arena, the foresight, risk management and insight will be missing.  A newly inducted project manager will also face issues in taking control of things, because he/she is a new entrant.

Missing Comprehensive Training Programs
This problem hits both the implementers and the banks, and equally hard.  The way banks and implementers lose their IT manpower, there is almost always a dire need to hire people “yesterday.”  New bugs are being pointed out, unused or untested features are brought into operation, requirements are evolving, and new faces are being introduced to the team as older ones are leaving or have left already.  Continuous training of fresh employees must take high priority in such cases.  We have seen that this doesn’t happen.  Batches are trained, no doubt, but not the way they should be.  Application vendors and implementers are too stretched to move their consultants from clients to training labs.  With banks pressing hard on delivery and expanding their feature set at the same time, and the implementing engineers/consultants taking other opportunities, the implementation managers do not do what they must: training, training and more training.

I haven’t seen many banks with QA people, honestly.  And even those who have QA personnel have at most two.  How can two people possibly do a quality check of a whole banking system?  Furthermore, it is highly likely that the QC people are picked from the general IT industry, which means that they may not have experience of the banking environment and its applications.  They may not know or fully understand the requirements of a user.  Also, it is possible that the QC personnel are hired after the requirements analysis is complete.  By then, it is too late.  I am not disputing the human ability to learn and act; the QC engineers can still pick things up, but only a subset of what they should, and in a very short time span.  Ultimately, you will mostly see QC people checking the application for crashes and interface-related errors.

No Stress or Automated Testing
A banking application is supposed to handle a large number of users.  Add the users of Internet banking, and you see a dramatic number within the usual eight business hours.  Since these applications are built with older or proprietary technologies, the choice of tools to stress- and load-test them is very slim. If the application has a web interface, or is developed using Oracle Developer, then you have a few options. But because of the tough timeline, and no time set aside for testing, load testing stays under the radar. Realizing this late in the process, banks find it convenient to purchase more hardware horsepower. There is hardly any calculation behind the assessment of the required hardware resources, so the limits of the available hardware are never assessed. Also, not using an automated tool forces the users and QC personnel to test each newly developed or fixed feature by hand. And when an update/fix/patch is received from the application vendor, people have to test everything again. This is a huge investment of time and resources, avoidable by using an automated testing tool.

For web applications, we have used IBM Rational Performance Tester, Rational Functional Tester and WebLoad. Creating scripts requires some programming as well.

To be continued…

This blog is also available at

Pakistan has seen a lot of banks going for new core banking solutions in the recent past. The top-scoring products here are:

  1. Temenos T24 (Seven banks)
  2. Sungard SYMBOLS (Three banks)
  3. Misys (Two banks)

And Pakistan is not the only country going for such solutions.  Banks in the Middle East have also procured the same core banking solutions to offer a wide range of services to their customers, using the newly built integrated information systems.  With the legacy systems, this was only a distant dream.  Consumer financing, driven by the rapid increase in the number of affluent and middle-class customers, forced the banks to opt for better systems rather than rely on their current legacy applications: an archipelago of applications, really.

Enter these new applications (or their new versions), and the business folks loved them from the word go.  There are not many vendors and implementers of such solutions, so we can be sure they are very, very stretched for delivery.  There are tons of things that need to be changed in the basic shrink-wrapped CD/DVD.  For an Islamic country, features like deducting Zakat (the annual obligatory charity of 2.5%) are not available out of the box, and the systems need to be modified.  Such modifications are typically done by the local vendor implementing the solution at the bank.

And then you start seeing the problems.

Lack of Knowledge about the Proprietary Technology
The implementing people are not familiar with the application and its underlying technology; they try to execute things based on what they learned in college, using C# or Java.  Most core banking solutions use neither of these tools and platforms.

Lack of Knowledge about the Proprietary System
The implementers were not there when the application was built/engineered, and thus have little idea of what goes on under the hood.  That lack of knowledge poses a significant threat.  Though the application vendor trains its implementation partners, it obviously cannot train a large number of the implementing company’s engineers.  Also, the application vendors are stretched themselves for delivery, and cannot spend much time away from implementations and from developing the next versions/fixes/patches of their applications.

No QA Process
Due to stringent timelines and overstretched resources, the implementers do not find it worthwhile to have a QA process to ensure delivery to a set standard; supposing there were any defined standards at all.

Scope Creep
Scope creep comes from the business users of the banks, as they do not have much idea about the new application and are used to the older one.  Only when they see things happening do they realize that they didn’t want things that way.  That brings us to the next point.

Customizing towards the Previous System
Business users can actually make the new system look, feel and act like the old system.  Worse, they may do it completely unawares.  Therefore, this question must be posed to them time and again, so that they double-check everything before they ask for it.

Unrealistic Timelines by Banks
Banks can set unrealistically ambitious timelines for project completion.  We have seen projects running many years late due to any of the reasons we’re discussing now.  Somehow each bank thinks that it has assessed the right timeline, and that it can handle things better.  But as we know in the IT world, it is either late or it doesn’t work.

Unrealistic Timelines by Implementers
To beat the competition and win the order, the suppliers of banking solutions also agree to the aggressive and unrealistic timelines; timelines that even they know cannot be met.

Lack of Implementation Resources (Functional and Technical)
Lack of resources hits the suppliers/implementers very hard.  A lot of banks are going for such solutions, locally as well as internationally, which leaves the workforce exposed to offers from around the world.  In the case of T24, we have seen offers being extended to people who merely mention the word T24 in their LinkedIn profiles.  And since the banks in Singapore, Hong Kong and the Middle East pay in USD or AED, people from Pakistan and India find such assignments very attractive.  The local employers try to mitigate this risk by having employees sign three-year bonds before training them, but this doesn’t help much, as the new employers are happily willing to pay out the remaining bond period’s salary to the current employer; after all, they have to save face in front of their client bank(s), who pay much more than a bank in Pakistan would pay a local implementer.

Subjugating Conditions on Consultants/Engineers
Consequently, the implementer imposes such restrictive conditions on its people that many feel disgruntled and mistrusted.  They look for a chance to get out, and they usually take it, too.

Cash Burnout Due to High Salaries
To counter that brain drain, the implementer then offers high salaries to keep people on board: close to or equal to what an employer in the Mideast or Far East markets would offer.  This obviously strains the profitability and cash flows of the implementing company, and the management loses interest in the project.  This further fuels the downward spiral of the implementation project.

To be continued…

This blog is also available at

Posted by: imranadeel | January 13, 2009

Will Satyam Scam Affect Indian IT Business?

A somewhat passionate debate is going on over the Internet and LinkedIn about Satyam’s scam and its fallout on Indian IT business.  Here’s my response to one such question over LinkedIn.

I just went through a piece of news from the World Bank on Reuters ( The World Bank has debarred the following Indian companies and their related entities from doing business with it or any of its subsidiaries:

  1. Satyam (debarred in September 2008; surprised?)
  2. Wipro (debarred in June 2007)
  3. MegaSoft Consulting (debarred in December 2007)

Satyam and Wipro were debarred because of “improper benefits to bank staff.”  Although Wipro has been debarred since June 2007, we haven’t seen much change in the Indian IT landscape.  This Satyam scandal certainly leaves a big mark, but I don’t think it is going to do a great deal of damage.  There are still large captive centers of major foreign companies in India, such as IBM, Microsoft, Intel, HP, Temenos, Texas Instruments, and many more.  These organizations cannot roll themselves back from India because of such a scandal.  And wherever these organizations are in business, the industry just won’t die.

So I think that while some new projects and contracts may go to Brazil, China et al., India will still maintain its position on the IT map.




