Migration steps to a private cloud

The rationale for moving to a cost-effective private cloud (reducing cost, improving the flexibility to use hardware resources dynamically, and defining the infrastructure architecture elastically) was covered in the earlier post. Applications running in the current environment must be migrated to the new private cloud to realize these benefits. To illustrate the thought process, the source platform is assumed to be stand-alone HP-UX and AIX servers, and the target platform is a virtualized Intel processor based chassis managed by the VMware ESX cloud operating system.

The migration steps are given below.

  1. Check whether the applications in the landscape use COTS (Commercial Off The Shelf) products. Verify with the vendor that each COTS product is supported in the VMware environment. If a COTS product is not supported, review the vendor's product road map to see whether VMware support is on the horizon, and explore an ASP model for the product. If neither is feasible, follow the exception process. (An organization should define an exception process for deviating from the private cloud.)
  2. Study the technology stack of the current application landscape. Verify that the technology stack can run on the new host operating system (Linux or Windows). For instance, if an organization runs the BEA WebLogic application server on HP-UX, verify in this step that WebLogic can run on Linux or Windows. Some technologies (Oracle DB, for example) do not support or do not perform well on the VMware cloud operating system. When a technology is not supported, it is an opportunity to consider migrating to a comparable technology with lower cost and risk and equal or higher benefits.
  3. Even when a current commercial technology stack (such as IBM WebSphere or IBM DB2) runs on the VMware platform, still perform a risk assessment of carrying the existing stack into the private cloud versus adopting an alternative. For example, compare IBM WebSphere against the open source application server JBoss. The net benefits of the two may be roughly equal, but in most cases there is a significant license cost difference between them.
  4. Study the migration cost from the current technology stack to the new technology stack in the cloud. For instance, study how the current technology stack (application server, database server) is actually being used, and look for any proprietary modules used by applications in the landscape. For example, IBM shipped CICS Transaction Gateway (CTG) jar files with older versions of WebSphere, and applications in the landscape may still be using them; those CTG jars were not part of the J2EE specification when IBM bundled them with the product, and Java connectors for CICS were only made available later. If applications in the landscape rely heavily on such proprietary modules and tools, migrating them from WebSphere to JBoss will be a significant effort. The total cost of migration must be taken into consideration. It is not a best practice to build mission-critical applications in the landscape on vendor proprietary technologies.
  5. Repeat steps 2, 3, and 4 for every element of the technology stack (database, security access management, data warehouse, back office systems, and so on). A small scripted sketch of this inventory-and-check loop follows this list.
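
As a rough sketch of steps 2 through 4, the loop below walks a hypothetical application inventory against a hypothetical support matrix and flags items that need vendor confirmation, a replacement, or a license and migration cost comparison. All product names, support flags, and inventory entries are illustrative assumptions, not authoritative compatibility data.

    # Hypothetical inventory-and-check sketch for the migration steps above.
    # The support matrix and inventory entries are illustrative, not authoritative.
    SUPPORTED_ON_TARGET = {          # component -> runs on the VMware/Linux target?
        "WebLogic": True,
        "WebSphere": True,
        "JBoss": True,
        "Oracle DB": False,          # assumption used only to illustrate step 2
        "DB2": True,
    }
    CHEAPER_ALTERNATIVE = {"WebSphere": "JBoss"}   # step 3: candidates to risk-assess

    inventory = [
        {"app": "loan-portal", "stack": ["WebLogic", "Oracle DB"], "cots": False},
        {"app": "crm-suite",   "stack": ["WebSphere", "DB2"],      "cots": True},
    ]

    for app in inventory:
        if app["cots"]:
            print(f'{app["app"]}: COTS product, confirm VMware support with the vendor (step 1)')
        for component in app["stack"]:
            if not SUPPORTED_ON_TARGET.get(component, False):
                print(f'{app["app"]}: {component} unsupported on target, evaluate a replacement (step 2)')
            alternative = CHEAPER_ALTERNATIVE.get(component)
            if alternative:
                print(f'{app["app"]}: risk-assess {component} vs {alternative}, include license and migration cost (steps 3-4)')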

Business Rationale for a private cloud

In a stand-alone hardware infrastructure, adding extra memory or CPU to a hardware unit may take three to five months and involve many people following various internal processes (purchase requests, infrastructure internal processes, the service provider's internal purchase process, and so on). The long lead time and heavy involvement from multiple teams are a consequence of current enterprise hardware design: most IT organizations have mostly stand-alone servers, which limits the flexibility to add resources to them.
For these reasons, hardware capacity planning during the initiation phase of a project tends to be generous, which is neither ideal nor cost efficient. The problem can be solved by procuring a larger server and partitioning it as the need arises. This concept has been in use on the mainframe platform for the last three decades, where each partition is called an LPAR (Logical PARtition). A similar concept proliferated in the Unix world, and the major Unix operating systems AIX, Sun Solaris, and HP-UX now support it.
AIX, Sun Solaris, and HP-UX run on their host RISC processors from IBM, Sun, and HP respectively. Because RISC processors are designed for high-volume, high-speed processing, there is a perception within IT leadership teams, including CIOs, that only RISC processors are suitable for data center operations. However, the virtualization operating system and blade chassis hardware for a RISC based server farm are more expensive than for a CISC (Complex Instruction Set Computer) processor based server farm (Intel Xeon, AMD). CISC processor based servers have carried the perception that they are meant for small-scale business and are not ready for enterprise data center operations. It has been a challenge for enterprise architects and IT strategists to convince the IT leadership team to re-platform the existing expensive and inflexible stand-alone RISC based servers.

The popularity, acceptance, and adoption of social networking and Web 2.0 platforms have indirectly helped IT leadership teams rethink their perception of CISC based servers for data center operations. Popular social networking sites such as Facebook run on the Linux platform, and the number of users on these sites is growing exponentially. This is empirical proof that CISC processor based servers are ready for high-volume, high-speed data center operations.

In the current economic climate, IT leadership teams in all sectors have the following objectives.

  • Reduce infrastructure cost (Capital Expense, CAPEX)
  • Reduce overall data center operation cost (Operational Expense, OPEX)
  • Utilize more energy-efficient (for both Green and cost purpose) devices
  • Minimize lost revenue due to downtime

These objectives can be met by developing a cost-effective private cloud, which additionally provides more flexibility and enables the organization to become more nimble and agile.

How to architect the private cloud?

First and foremost, a cloud operating system is required to manage a set of hardware, virtualize it, and create virtual images. There are two types of cloud operating system available.

  1. Hosted Server cloud operating system
  2. Bare-metal server cloud operating system

1. A hosted server cloud operating system requires a host operating system: the cloud operating system runs on top of it, and the devices are managed directly by the host operating system. It is not designed to clone an operating system and create an image; it is primarily designed to clone an application. It is not recommended for data center operations such as a data warehouse.

2. A bare-metal server cloud operating system has a microkernel that manages all the hardware directly, and the microkernel is generally highly optimized. Open source cloud operating systems are available, but for mission-critical systems it is highly recommended to run under a commercial cloud operating system. VMware ESX Server is one of the popular commercial cloud operating systems; its microkernel is well engineered and does a great job of managing the resources.
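
To make the bare-metal model concrete, here is a minimal sketch that uses pyVmomi, VMware's Python SDK for the vSphere API, to connect to an ESX host and list the hardware the hypervisor manages directly. The host name and credentials are hypothetical placeholders, and the unverified SSL context is only for a lab setup.

    # Minimal pyVmomi sketch (VMware's Python vSphere SDK); host and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()       # lab only; use verified certificates in production
    si = SmartConnect(host="esx01.example.com", user="root", pwd="secret", sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            hw = host.summary.hardware                # the hardware the bare-metal hypervisor owns
            print(host.name, hw.numCpuCores, "cores,", hw.memorySize // 2**30, "GiB RAM")
        view.Destroy()
    finally:
        Disconnect(si)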

What are the hardware requirements for a private cloud? Well, it depends. Quite a few certified hardware providers supply servers that run under VMware ESX Server. Both blade and rack hardware from all major manufacturers (Dell, HP, IBM, Fujitsu) runs under VMware ESX Server. Similarly, the major manufacturers of storage area networks, SCSI controllers, RAID controllers, Fibre Channel adapters, and Ethernet NICs support VMware ESX. Select the hardware, memory, and storage based on your needs; guidelines are available for selecting the hardware units and should be followed as the need dictates.
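
As a rough illustration of how such sizing guidelines can be applied, the arithmetic below estimates how many blades a consolidation might need from the expected VM demand. Every figure (VM count, per-VM sizes, blade capacity, overcommit ratios) is a hypothetical assumption for the sketch, not a recommendation.

    # Back-of-the-envelope consolidation sizing; all figures are hypothetical assumptions.
    import math

    vm_count = 120                        # VMs expected after migration
    vcpu_per_vm, ram_gb_per_vm = 2, 8     # average VM size

    cores_per_blade, ram_gb_per_blade = 16, 192
    cpu_overcommit, ram_overcommit = 3.0, 1.2   # vCPU-per-core and memory overcommit ratios

    blades_for_cpu = math.ceil(vm_count * vcpu_per_vm / (cores_per_blade * cpu_overcommit))
    blades_for_ram = math.ceil(vm_count * ram_gb_per_vm / (ram_gb_per_blade * ram_overcommit))
    blades_needed = max(blades_for_cpu, blades_for_ram) + 1   # one extra blade as failover headroom

    print(f"CPU-bound: {blades_for_cpu} blades, RAM-bound: {blades_for_ram} blades, provision {blades_needed}")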

What are the target operating systems? Again, it depends on the requirement. VMware ESX supports both Linux and Windows: all major flavors of Linux (such as Red Hat and Novell SUSE) and the Windows Server platform, and it can run both operating systems at the same time. The size of each image can be selected based on the requirement.

Results

With the private cloud architecture using Intel/AMD processors, all the above objectives are being met.

Cloud Computing Architect

Due to information overload and powerful search engines like google.com and bing.com, authentic information is freely available on almost any topic, from quantum mechanics to cloud computing. An average person can have a decent conversation about any topic with minimal effort. Amid this information overload, it is a real challenge to identify a genuine expert in any field.

Experts are not born; they are made. Before experts become experts, they are beginners searching for information to get familiar with the topic. In the information overload, a beginner can easily be presented as an expert.

Cloud computing gets a lot of attention in the current business environment, and IT executives struggle to differentiate a cloud computing architect from a person who JUST knows the right buzz words. Below are guidelines for telling a cloud computing architect apart from a person who JUST knows the right buzz words and has only basic knowledge of cloud computing.

Cloud computing Architect:

1. In-depth understanding of the cloud computing tool box –

  • Understands the existence and usage of various technical and business cloud environments
  • Understands the technical and business stack types in each cloud and the usage of those stacks for cloning
  • Understands each logical and physical unit of the stack (storage, database, BPMS, OWL, UML, and business services such as loan origination, consultative services, collections, and so on)
  • Understands the behind-the-scenes technology (cloud operating system, virtualization, storage area network, data transfer rates, RAID types, data redundancy, disaster recovery plans, and so on). Some argue that understanding the behind-the-scenes technology is not required for an architect. In my strong opinion, that is the differentiator between an architect (expert or evangelist) and a novice (someone with only a quick exposure to the concepts): it helps the expert pick the right solution for the right problem.

2. Enterprise view of the cloud –

  • Various possible integrations of cloud solutions
  • Latency between cloud solutions

3. Solution design –

  • Various possible instantiations of the enterprise view of the cloud

4. Solution delivery

  • This is the most important aspect of a cloud computing architect's role. The first three areas focus on solution design and its components; solution delivery focuses on solving a business problem using the packaged cloud solution. It is a business-problem-and-solution matching exercise. To illustrate the role of a cloud computing architect, take a very practical, simple example. Say a company wants to sell loan origination (retail or lease) as a service to smaller banks or credit unions. For this business problem, the solution provider (call the company FinCo) has to understand the common business processes involved in loan origination as well as the customized loan origination for each customer (bank or credit union), and both the common and customized parts need to be implemented on a technology stack such as LAMP, messaging, a persistence database, and so on. The common loan origination solution is also imaged for deployment: it can be a cloud solution, ready to use. When FinCo gets a new customer, it can deploy the common solution in the cloud and make the necessary modifications to meet that customer's needs. Deploying a solution from the loan origination cloud can be done by a sales or pre-sales technical team; the architecture of the loan origination stack in the cloud is done by the cloud computing architect. A small sketch of this deploy-then-customize flow follows.
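
As a rough illustration of that deploy-then-customize pattern, the sketch below clones a common loan origination image and applies one customer's overrides. FinCo's actual stack is not known here; every field, workflow step, and customer name is a hypothetical assumption.

    # Hypothetical sketch of the common-image-plus-customization pattern described above.
    from copy import deepcopy

    # Common loan origination solution, imaged once and reused for every new customer.
    COMMON_IMAGE = {
        "stack": ["LAMP", "messaging", "persistence-db"],
        "workflow": ["application", "credit-check", "underwriting", "funding"],
        "branding": "FinCo default",
    }

    def deploy_for_customer(name, overrides):
        """Clone the common image and apply one customer's customizations."""
        instance = deepcopy(COMMON_IMAGE)
        instance.update(overrides)
        instance["customer"] = name
        return instance

    # A hypothetical credit union that adds a member-eligibility step and its own branding.
    cu = deploy_for_customer(
        "Example Credit Union",
        {"workflow": ["application", "member-eligibility", "credit-check", "underwriting", "funding"],
         "branding": "Example CU"},
    )
    print(cu["customer"], "->", " > ".join(cu["workflow"]))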