Unrest in the technology industry

A lot has happened since I wrote my last blog:

  • Steve Jobs left Apple’s chief executive office.
  • HP came up with a radically different business strategy.
  • Facebook is working on a search tool.
  • Google launched Google+.
  • Google bought Motorola Mobility.
  • Google bought IBM patents for an undisclosed amount.
  • The future of Android after Google’s acquisition of Motorola Mobility appears questionable.
  • Microsoft continues to prove its copycat marketing strategy with Mango.
  • It is more than obvious that personal computers are migrating to the legacy typewriter category.
  • Oracle has the money, and may have the desire, to buy HP.
  • Scientists found a planet made of diamond.
  • The gold deposits on earth were carried by asteroids; keep an eye on those asteroids to monitor the gold value in financial markets.

…and a few other significant events.

For each of these events I wanted to write a blog post, but unfortunately after the 24th hour of a day the clock resets and the cycle repeats. Today, heavy rain in Michigan cancelled my pre-quarterfinal cricket game, which freed up a few minutes, and hence this blog post.

Watching the significant events of the recent past, it is evident that all the tech dinosaurs are roaming wild to avoid the next big tech asteroid. In the current market, it is a big business puzzle to predict predator and prey, due to the accelerating rate of consumer adoption in the technology segment. Many companies were formed in a garage with a simple idea; some succeeded by simplifying the original idea, became giants, and were soon worth billions. Most of these overnight multi-billion-dollar companies do not manufacture goods and do not provide a tangible service, and hence they are very vulnerable to the next big, simpler idea.

Recent trends have been proving that 200-year-old economic theory needs fundamental change. Economic value is measured not only by goods and services but also by a geo-cosmic fiber created by an invisible social network.

The dominant players on the technology map are changing rapidly; a player the size of a continent can soon become an unnoticeable island, and vice versa. All recent activity in the technology segment is about positioning for the uncharted journey of consumer trends. We wish we had a magic lens to preview the future, but we don’t. No tool, process, or person can predict the future precisely, but people are capable of approximating or defining it.

How do we approximate the future?

Keep a few concepts in mind.

  • Timing is everything.
  • Learn history and study the present.
  • Unleash the core and its dependencies.
  • Connect the dots.

Using the above concepts, how do we approximate the future of the disk-storage market segment? Answering that question completely would take a 30-40 page research report, so the approach here is to scratch the surface and illustrate the method with an example.

There are a few dominant players in the storage industry, such as NetApp, EMC, IBM, and Hitachi. Today’s storage is based on longitudinal or perpendicular magnetic recording. The limitation of these recording methods is the competing requirements of read, write, and stability. Areal density with this technology ranges from roughly 130 GB to 1 TB per square inch. The next generation of recording will be based on shingled magnetic recording, bit-patterned recording, thermally assisted recording, or a combination of bit-patterned and thermally assisted recording, and it overcomes the limitations of the current methods. Areal density will range from roughly 2 to 10 TB per square inch.
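As a back-of-the-envelope check on what those densities mean for drive capacity, here is a rough sketch; the platter dimensions are illustrative assumptions on my part, not figures from any vendor:

```python
import math

def platter_capacity_tb(areal_density_tb_per_sq_in, outer_d_in=3.5, inner_d_in=1.0):
    """Rough usable capacity of one platter surface.

    areal_density_tb_per_sq_in: TB per square inch (~1 today, ~10 next generation)
    outer_d_in / inner_d_in: illustrative platter diameters in inches
    """
    # Usable recording area is an annulus between the inner and outer diameters.
    area_sq_in = math.pi * ((outer_d_in / 2) ** 2 - (inner_d_in / 2) ** 2)
    return areal_density_tb_per_sq_in * area_sq_in

today = platter_capacity_tb(1.0)      # current perpendicular recording
next_gen = platter_capacity_tb(10.0)  # next-generation recording target
print(f"~{today:.1f} TB per surface today, ~{next_gen:.0f} TB next generation")
```

Even a rough model like this shows why a 10x jump in areal density reshapes the market.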

All current and future magnetic and thermal recording is done by sets of semiconductors, and a set of companies manufactures the semiconductors used in building storage units. Semiconductor trends are based on advances in quantum physics and solid-state physics. Solid-state physics trends can be studied by monitoring the papers published in academic journals and forums (for example, carbon-based semiconductors are emerging, and future electronic devices like the iPad may have almost zero mass and fold into a shirt pocket). Connect all the above to trends in associated technology (quantum computing), problems faced by end users, problems introduced by the new technology trends, and so on. Being exhaustive in each of these steps will approximate the future of the storage industry.

All the current tech giants are performing similar studies to define or approximate the future. They are defining their merger and acquisition strategy, corporate strategy, business strategy, collaborative strategy, and competitive strategy. It has become more challenging due to the fundamental change in economic valuation, and hence the unrest in the technology industry.

Only time can tell the actual results of these strategies of the technology giants.

I wish I had more time to write less in this post.

Top Level Domains

Currently there are around 300 approved top-level domains, such as .com, .gov, .net, .in, and .ca. A top-level domain is the part of the internet name space used to uniquely identify a web site on the internet. For instance, blog.prabasiva.com uniquely identifies a site on the internet, and the top-level domain of that site is .com.
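To make the naming concrete, here is a naive sketch of pulling the top-level domain out of a host name. A real resolver consults the official TLD list, and multi-label suffixes like .co.uk would break this simple split, so treat it as a sketch only:

```python
def split_hostname(hostname):
    """Naively split a hostname into (subdomains, registered name, TLD).

    Assumes a single-label TLD (e.g. .com); multi-label suffixes like
    .co.uk are not handled -- this is a sketch, not a resolver.
    """
    labels = hostname.lower().rstrip(".").split(".")
    tld = labels[-1]
    name = labels[-2] if len(labels) > 1 else ""
    subs = labels[:-2]
    return subs, name, tld

subs, name, tld = split_hostname("blog.prabasiva.com")
print(subs, name, tld)  # ['blog'] prabasiva com
```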

The entire internet name space is managed and controlled by ICANN, a not-for-profit corporation with participants from all over the world. Its main objective is to develop policy on the internet’s unique identifiers.

History was made this week when ICANN approved an open name space for top-level domains. It means the internet name space can have top-level domains like .bank, .school, .college, and .university, trademark names like .td, .starbucks, .google, and .microsoft, or personal names like .gates, .raji, .darshan, and .deepak.

ICANN has published a roughly 350-page document on the guidelines and procurement process for acquiring a new top-level domain.

Pros & Cons analysis:

1. Security ++++: An open top-level domain name space will minimize web site phishing. Under the current top-level domains there are many phishing threats; it is easy to create a fake site that impersonates a legitimate one. Retaining ownership of the top-level domain within the corporation greatly reduces phishing threats.

2. Social Network ++: Provides a local platform to establish communities. Brand-based communities will be established within the brand’s own site and may link back to existing social networks. As adoption of brand-based communities grows, social networking sites like Facebook will be devalued, and the potential monopoly threat from Facebook will be reduced.

3. Startup +: Provides opportunities for new startups. Domains like .cars, .motorcycle, .banks, and .autoloan will be owned either by domain managers who already exist in the market (like godaddy.com) or by new startups that own the domains and sell sub-domains. There will be an intense land-grab competition to acquire common names like .cars, .computer, .tablet, and .mobile. It also provides an opportunity for a new startup to offer full service in that space. For instance, if a new company owns .autoloan, it can sell sub-domains to smaller banks and credit unions and also sell them an auto-loan platform. This will boost business cloud offerings in the market. There are numerous startup business opportunities.

1. Cost ----: The initial registration cost is $180K and the yearly fee is around $25K. There is no guarantee that every application will be approved, and if an application is rejected the fee is not refunded. The cost may be affordable for Fortune 100 companies registering their brand, but it may not be affordable for a startup, and there is a significant risk that the initial capital investment goes to zero.
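Using the fee figures quoted above, here is a quick sketch of the cumulative cash outlay over the first few years (ignoring infrastructure, legal, and operating costs, which would dominate in practice):

```python
def gtld_outlay(years, application_fee=180_000, annual_fee=25_000):
    """Cumulative fees for holding a new gTLD, using the figures quoted above."""
    return application_fee + annual_fee * years

# One-time application fee plus recurring annual fees
for y in (1, 5, 10):
    print(f"year {y}: ${gtld_outlay(y):,}")
```

Over a decade the recurring fees exceed the application fee itself, which is why the risk profile differs so much between a Fortune 100 brand and a startup.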

2. Historical data ----: The .jobs top-level domain has been available for the last few years. It did not make a significant impact on job sites.

3. User Experience --: PageRank assignments to existing sites will be affected if top-level domains change. There are technical solutions to redirect existing domains to new top-level domains; however, non-technical end users will be confused by the new top-level domains.

Who should buy the new top-level domains?

1. Corporations should buy top-level domains to retain their brand identity on the internet.
2. Corporations racing for dominance in a market place (like tablets or universal single sign-on) should grab their top-level domains immediately.
3. Service providers who are dominant in a market place (like auto loans), or close to dominance, should buy top-level domains to minimize or eliminate future market risk.
4. Startups getting into the full-service spectrum should buy top-level domains.

Disaster Recovery & Business Continuity Framework

Disaster Recovery (DR) and Business Continuity Planning (BCP) have become a regulatory requirement in the Oil & Gas, Banking & Finance, and Energy sectors, and they are good business practice for any sector. I prefer to call it the DR and BC framework, but since the term DR & BCP is widely accepted, I will use it loosely and interchangeably with DR & BC framework in my thoughts.

What is not DR & BCP ?
DR & BCP is not a plan for restoring a business operation when a disk array fails in a data center or a server reboot fails. That is an outage, and the incident management process should be followed; it is not a disaster recovery plan.

What is DR & BCP ?
DR & BCP is a framework for the speedy restoration of business operations, with minimum impact, in case of an unforeseen disastrous circumstance.

What is the context for DR & BCP ?
Priorities in life change quickly when circumstances change drastically. This is clear if we look back at the priorities we held 5, 10, 15, and 20 years ago. Priorities shift even when circumstances change at their own pace; they shift exponentially faster when circumstances change drastically.

When drastic changes occurred due to natural disasters in Haiti, New Zealand, and recently Japan, observe how people’s priorities changed during and immediately afterwards. DR & BCP is the framework to follow to restore business operations immediately after that kind of disaster, whether at a global level or at a local level specific to the business operation.

What are the components of DR & BC framework?

  • Business architecture: blueprint illustrating the functional aspects of the core business
  • Technical architecture: blueprint of how technology is used to service the core business
  • Building architecture: the various locations of business operations, their interconnections, and the complete blueprint of each location, including electrical circuits, backup generators, drainage, sewage, etc.
  • Risk management plan: risk assessment, risk rating, and approach
    • Location: location of the current operation
    • Various other factors
  • Restoration plan/procedure: scenario planning for restoring the business; sequence of operations, mechanics, process, etc.
  • People: chosen team members who will run the restoration plan/procedure
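The risk assessment and rating piece of the framework can be sketched as a simple probability-times-impact score. The scales and thresholds below are illustrative assumptions, not from any regulatory template:

```python
def risk_rating(probability, impact):
    """Score a risk on 1-5 probability and 1-5 impact scales.

    Thresholds (15 for high, 8 for medium) are illustrative only.
    """
    score = probability * impact
    if score >= 15:
        return score, "high"
    if score >= 8:
        return score, "medium"
    return score, "low"

# e.g. a data center in an earthquake-prone region: fairly likely, severe impact
print(risk_rating(4, 5))  # (20, 'high')
```

A "high" rating here is what should trigger the cost-benefit analysis discussed under Location below.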

DR & BCP tends to emphasize the restoration plan/procedure component, with little or no emphasis on the other components listed above. For the banking industry, there is a complete IT booklet on BCP available from the Federal Financial Institutions Examination Council, and it contains lots of great ideas. However, when I pictured restoring business operations during and after a horrific event, my thought process kept coming back to the components listed above as what is actually required. The booklet helps achieve certification from the examiner, but I feel team formation and location are also key pieces of the framework that are missing from the template. In general, DR & BCP has come to mean just the restoration plan/procedure, which is all about the techniques and mechanics of restoring business operations. The rest of the components are self-explanatory and business dependent. I will amplify the importance of people and location below.

People: When a disaster occurs at a key business location, the operations team needs a plan to recover from the disaster quickly, mentally, emotionally, and physically, and then work on restoring business operations. DR & BCP is a framework that assists in restoring business operations when such a circumstance is presented to us by Mother Nature, or by any other means, as a surprise. DR & BCP requires more emphasis on identifying a team with the mental toughness to stay calm in a very difficult situation and direct the rest of the team to restore business operations.

We will all remember Mr. Chesley Sullenberger forever for his heroic display on the Hudson River, where he saved hundreds of lives. We were all honored to have Mr. Sullenberger at the 2009 Super Bowl opening ceremony. In the recent New Zealand earthquake, Christchurch mayor Mr. Bob Parker was another example of a great crisis manager. Mr. J. Radhakrishna, who worked on the recovery after the 2004 tsunami, was internationally recognized for his recovery measures; former President Clinton personally met him and asked him to share his expertise in Washington.

With or without a DR & BCP, these heroes stayed calm and directed others toward a speedy restoration of operations with minimum impact. Just having a plan, without a personality like theirs to execute it, will diminish the success rate of the DR & BCP.

Who is the Mr. Sullenberger, Mr. Parker, or Mr. J. Radhakrishna in your DR & BCP plan?

Location: Key business operation locations are expected to be selected for a low probability of natural disaster. This is not part of executing the DR & BCP plan, but it is part of its risk assessment. The probability of disaster for key locations is expected to be very low; if the risk analysis reveals it is higher, an immediate business cost-benefit analysis to lower the risk is mandated. In most cases, the locations of key business operations like data centers and power generation units were selected 40, 60, or 80 years ago, without much thought given to location. Generally the business case for moving to a safer location is not justifiable, and hence dual locations are used to offset the risk. However, there are scenarios where both locations, or all of several locations, carry the same or higher probability of disaster, and in some scenarios all the locations may suffer a disaster in the same time frame. Those are high-risk areas for the business. In the financial sector these losses have an economic impact, whereas in the energy and oil & gas sectors the impacts are both economic and destructive of life. These kinds of guidelines must be mandated as part of the regulatory requirements for key sectors like Oil & Gas and Energy.

Ideally, the probability of disaster at a key business location must be a factor when evaluating a service provider or hosting provider for key or mission-critical business operations. The location determines the probability of a natural disaster. Earthquakes are among the major natural disasters with significant impact, along with tsunamis (a side effect of earthquakes in the ocean), hurricanes, storms, heavy rainfall, and so on. The recent earthquakes in New Zealand and, most recently, Japan prove that the “ring of fire” has a very high probability of massive earthquakes. That is a key indicator that key business operations like data centers should not be in this region. If a current operation like a data center is in the region, consider, as stated above, a business cost-benefit analysis to migrate to a safer location. My recommendation is to hire a geologist for a week to pick an ideal location with low probability. The ring of fire has a high probability of massive earthquakes, whereas other areas have a high probability of milder ones; even a mild-to-medium earthquake at a data center will have a major business impact. See the earthquake-hazard chart for the US: the red spots have a very high probability of massive earthquakes, the green and yellow shaded areas have a high probability of mild-to-medium earthquakes, and the white areas have a very low probability of only very mild earthquakes. Similar charts are available for other natural disasters like hurricanes and floods. Overlay those maps together and you will see very few places like Michigan, which is one of the safest places in the US to host data centers. Michigan also has a big pool of highly talented labor to manage operations like data centers, compared to other safe places in the US.
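The map-overlay idea can be sketched as combining per-hazard scores for each candidate location; the locations and scores below are invented for illustration, not real hazard data:

```python
# Hypothetical hazard scores (0 = none, 5 = severe) per candidate location.
hazards = {
    "Location A": {"earthquake": 5, "hurricane": 1, "flood": 2},
    "Location B": {"earthquake": 1, "hurricane": 4, "flood": 3},
    "Location C": {"earthquake": 1, "hurricane": 1, "flood": 1},
}

def overlay(scores):
    """Combine hazard maps by taking the worst (max) hazard at a location.

    A single severe hazard is enough to disqualify a site, so max is a
    reasonable (conservative) way to overlay the maps.
    """
    return max(scores.values())

# Pick the candidate whose worst hazard is lowest
safest = min(hazards, key=lambda loc: overlay(hazards[loc]))
print(safest)  # Location C
```

Real site selection would of course weight hazards by business impact and use actual hazard-map data.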

Note: Thank you to Babu Jon for providing constructive feedback to improve the quality of the document.

Google car

Almost ten years ago, my prediction was that within the next ten years neuro-dynamic programming would gain adoption and become a common computer programming approach. The variance between my prediction and the actual result is significant. The recent announcement of the unmanned car from Google is very encouraging that my prediction will become a reality one day.

In simple words, in the machine learning field there are a few broad types of learning: supervised, unsupervised, and reinforcement learning. Reinforcement learning makes a system learn in a dynamically changing environment, and the neuro-dynamic programming concept is based on it. Supervised learning, on the other hand, is widely used in voice recognition (the voice-dialed phone directory at your office), face recognition, and so on; techniques like neural networks and Bayesian learning are used in supervised learning.

Q-learning and dynamic programming are some of the techniques available in reinforcement learning. It mirrors a method humans use to learn, and the framework is very simple. In a dynamic environment, the sequence of events is random, and an action is taken for each event. Feedback (a reward) for the action is received, and based on the immediate and long-term reward of the action taken, a weight is assigned to the action. Based on the system’s exploration and exploitation strategy, and the weights assigned to past actions, the action is repeated when the same or a similar event happens.

In the current computation paradigm, programming logic is deterministic. In the future, deterministic logic will not be sufficient for computation. A car that uses a set of such techniques to drive itself can also be used to learn about that specific car: at any given time, for a given VIN, all the necessary details about the car will be available. The same applies to all entities, including humans.
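The reward-and-weight loop described above is essentially the tabular Q-learning update. Here is a minimal sketch on a toy two-state problem; the states, rewards, and hyperparameters are illustrative, not from any real system:

```python
import random

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 2-state, 2-action problem.

    Action 1 in state 0 leads to state 1, where action 1 pays reward 1;
    everything else pays 0. The agent should learn to prefer action 1.
    """
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        for _ in range(2):  # short episodes
            # Epsilon-greedy: mostly exploit learned weights, sometimes explore.
            if rng.random() < epsilon:
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda x: q[(s, x)])
            s2 = 1 if a == 1 else 0
            r = 1.0 if (s == 1 and a == 1) else 0.0
            # Q-update: blend immediate reward with discounted future value.
            best_next = max(q[(s2, 0)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learning()
```

After training, the weight for the rewarding action in state 1 dominates the alternative, which is exactly the "repeat the well-rewarded action" behavior described above.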

My blog review in 2010

There are quite a few interesting blog posts I really wanted to publish in 2010, but I could not make time for them due to other priorities at work, school, and home. I am committed to completing the following posts before the first quarter of 2011. Here are a few topics that are partially completed and will soon be posted.

  • Realistic approach to embrace, adopt, and implement cloud computing in the banking and finance industry
  • Innovation strategy
  • Mobile Platform strategy
  • Reference architecture for mobile platform rapid application development.

Here is a short summary of how my blog performed in 2010. The following report was automatically created by WordPress.

The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads: This blog is on fire!

Crunchy numbers

About 3 million people visit the Taj Mahal every year. This blog was viewed about 25,000 times in 2010. If it were the Taj Mahal, it would take about 3 days for that many people to see it.

In 2010, there were 20 new posts, growing the total archive of this blog to 123 posts. There were 22 pictures uploaded, taking up a total of 3mb. That’s about 2 pictures per month.

The busiest day of the year was January 14th with 173 views. The most popular post that day was Different types of architects.

Where did they come from?

The top referring sites in 2010 were linkedin.com, chiefarchitect.squarespace.com, google.com, youtube.com, and google.co.in.

Some visitors came searching, mostly for types of architects, how to become an enterprise architect, enterprise architecture as strategy, it strategy framework, and different types of architects.

Attractions in 2010

These are the posts and pages that got the most views in 2010.


  • Different types of architects (August 2008)
  • How to become an enterprise architect? (August 2008)
  • IT Strategy – General framework (June 2008)
  • Software System Architecture definition process (August 2008)
  • About me (June 2008)

Stand up for others by sitting down

I spent some time this morning figuring out how these four kids form OGL. The background bus looked very familiar to me, and I realized the original of this school bus is in the Henry Ford Museum in Dearborn, Michigan, USA.

Rosa Parks, who sparked a historic civil rights movement, was travelling in this bus. She proved that a person can stand up for others by sitting down.

My sincere thanks to Google for making us remember her today.

Execution Strategy


Application portfolio management drives the technology road map definition, and hence the first step is to perform application portfolio management. In application portfolio management, the applications in the landscape are assigned to four categories. The membership of each category depends on the IT and business strategies, and the categories are:

1. Replace, 2. Retain, 3. Sundown, and 4. Introduce

Follow the enterprise sundown procedure for applications under sundown. Then group the applications in the replace, retain, and introduce categories. Create an enterprise business process map and align the applications under it, then align all the technologies used by each application. At this point you will have a holistic, high-level chart that maps business processes to sets of applications and applications to technologies. To attain the end result, keep this chart high level and focus on the major items in both the application and technology layers.
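The business-process-to-application-to-technology chart can be sketched with plain data structures; the application names and mappings below are invented examples, not from any real landscape:

```python
# Hypothetical portfolio: application -> (category, business process, technologies)
portfolio = {
    "LoanOrigination": ("retain", "Lending", ["Java", "Oracle DB"]),
    "LegacyGL": ("replace", "General Ledger", ["COBOL", "Mainframe"]),
    "OldReporting": ("sundown", "Reporting", ["Crystal Reports"]),
    "MobileBanking": ("introduce", "Channels", ["Kotlin", "REST"]),
}

def chart(portfolio):
    """Group applications (and their technologies) by business process,
    skipping sundown applications, which follow the sundown procedure."""
    by_process = {}
    for app, (category, process, techs) in portfolio.items():
        if category == "sundown":
            continue
        by_process.setdefault(process, []).append((app, category, techs))
    return by_process

for process, apps in chart(portfolio).items():
    print(process, apps)
```

Keeping the chart at this level of granularity is what makes the later retain/replace/introduce mapping tractable.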

A successful, practical, and value-adding enterprise architect will strike a good balance in this step, capturing key information and avoiding detail. For instance, there is no focus on items like two Mac workstations used by Marketing to create a corporate font, or a Lotus Notes system used by five users in a field location.

A general approach to mapping the application landscape categories to the technology landscape is given below.

  • Retain: Evaluate the technology used by the retained applications and ensure it meets the technology road map principles.
  • Replace: Perform a build-vs-buy analysis; for either solution, ensure it adheres to the technology road map principles. Follow the sundown procedure for the current application.
  • Introduce: Perform a build-vs-buy analysis; for either solution, ensure it adheres to the technology road map principles.

The basic principle in defining the technology road map is distinguishing differentiating technologies from commodity technologies in the landscape. The simple idea of categorizing the technology stack is as follows.

  • Invest in differentiating technologies
  • Optimize the cost of commodity technologies


What counts as a differentiator or a commodity depends on the industry. For instance, in the banking and finance industry the key differentiators are security, compliance, investor confidence, and investment management, whereas in the energy and utility sector they are safety, security, green technologies, and energy efficiency. For any for-profit organization, the corporate strategy, IT strategy, finance strategy, and so on are reflections of the competitive strategy. Conceptually, the differentiating factors for each industry can be analyzed, but the organization’s competitive strategy defines the attributes of the differentiators within its market sector.

Ideally, the differentiators from the competitive strategy transform into the corporate strategy and are infused into each organizational strategy. This strategic alignment ensures unidirectional execution across the entire corporation.

In a recent project I worked on, the following were the top seven differentiating and commodity technology areas for a company in the banking and finance sector.

The top 7 differentiating technology areas are: 1. Security, 2. Compliance, 3. IT Service Management, 4. Integration, 5. Data warehouse, 6. Back office, and 7. Private Cloud.

The top 7 commodity technology areas are: 1. Operating System, 2. Application Server, 3. Web Server, 4. Database Server, 5. Desktop, 6. Storage, and 7. Collaboration.

The outcome of the technology road map is a positive net value in terms of cost, flexibility, security, adaptability, agility, and sustainability. A solid technology implementation plan is created on a strong foundation of principles, alignment with corporate direction, executive sponsorship, and senior leadership endorsement. These are the key qualities required to execute a plan successfully.

Secrets for technology roadmap execution

Integrate the architecture team as part of the technology delivery team.

Bring in the industry’s best product experts to implement the technology road map. Take time to onboard experts with a great positive attitude and learning aptitude onto the architecture team.

The attitude of the experts plays a major role in implementing the technology road map successfully. Since commodity technologies will be replaced, the current support teams of those technologies will put up strong resistance to introducing and implementing the replacements. Industry experts with deep knowledge and a great positive attitude will defeat that resistance and instill confidence in the teams.

Conduct technology forum discussions, participate in all enterprise and departmental town hall meetings, explain the reason and rationale for the technology changes to developers and application managers, and illustrate the measurable value of the new technology road map.

Bring market research analysts and research documents into the organization to provide evidence and case studies of success stories for the selected technology road map in the industry.

Identify the low-hanging fruit, implement it quickly, and celebrate success.

Ensure the implementation teams understand that failure is not an option.

Ensure the implementation teams understand that there is no fallback plan available in case of failure to implement the new technology.

Ensure the implementation teams do not look back and second-guess the technology decisions made. At the same time, assign in-house industry experts to the implementation teams to overcome any technical challenges they face. For complex technical problems, engage the premier support team immediately to augment the in-house experts in uncovering and solving them.

Engage the experts and define a detailed architecture covering scalability and high availability.

Finally, engrain a “MAKE IT HAPPEN” mindset in the technology implementation teams.

All the above steps are not just theory. This framework was followed, a technology road map was defined, and it was successfully implemented in less than 8 months for a large company in the banking and finance sector. We “MADE IT HAPPEN”.