Friday, July 27, 2012

Open Connectivity PROTOCOL

A Programmable Logic Controller (PLC) is a mini computer used to automate manufacturing processes. Machines such as robots, packaging lines, and assembly lines use PLCs to automate manufacturing. In the automotive industry, vehicle production is divided into stamping, assembly, and painting processes, each of which requires extensive use of PLCs. There must be a mechanism for communicating this process control data to IT systems and applications. OPC (Open Connectivity) provides a published industrial standard for system interconnectivity with PLCs.
OPC at a glance
OPC is a collection of software interfaces for exchanging data between IT applications and PLCs. The software is designed according to the rules of Microsoft COM technology: an OPC server (COM server) provides data about the PLCs, and an OPC client (COM client) accesses the offered data. The legacy expansion of OPC is OLE for Process Control. You can find more information about OPC at the following link: http://www.opcfoundation.org/
Why is OPC needed?
A proprietary solution from each vendor might require multiple drivers to be installed for information exchange with PLCs. Multiple device drivers create too many requests for the same data from multiple devices, and implementing these drivers costs money and time. OPC is the popular standard that avoids the need for so many device drivers, and OPC drivers are readily available from different vendors.
OPC Components
·         OPC Data Access (DA): Provides access to real-time process data. Using OPC DA, the most recent values of temperatures, pressures, etc. can be read (see the sketch after this list).
·         OPC Historical Data Access (HDA): Provides access to historical process data, typically stored in a database or a remote terminal unit (RTU).
·         OPC Alarms and Events (A&E): Used to exchange process alarms and events.
·         OPC Data Exchange (DX): Defines how OPC servers exchange data with other OPC servers.
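To make the DA interface concrete, here is a minimal sketch of an OPC DA read from Python, assuming the third-party OpenOPC library; the server ProgID and tag name are hypothetical placeholders for whatever your environment exposes.

import OpenOPC

opc = OpenOPC.client()                  # DCOM-based OPC DA client (Windows)
opc.connect('Matrikon.OPC.Simulation')  # hypothetical server ProgID
value, quality, ts = opc.read('Random.Real4')  # hypothetical tag name
print(value, quality, ts)               # current value, quality, timestamp
opc.close()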
The OPC Foundation has also developed an OPC compliance test suite to certify that different vendors follow the OPC standards.
Different OPC Servers
LinkMaster
LinkMaster is an OPC server provided by Kepware that links data between OPC servers. For example, the RSLinx (Rockwell Automation) OPC server might be used for connectivity to Allen-Bradley PLCs, and LinkMaster can be used to group and aggregate data from multiple OPC servers. LinkMaster also supports scaling of link data items, meaning raw data is converted into engineering units for OPC client applications.
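As an illustration of that last point, the raw-to-engineering-unit conversion is typically a linear map. Here is a minimal sketch with made-up ranges (a 15-bit raw input scaled to 0-100 degrees Celsius); the actual ranges come from your device and LinkMaster configuration.

def scale(raw, raw_lo=0, raw_hi=32767, eu_lo=0.0, eu_hi=100.0):
    # Linear map from the raw device range to engineering units
    return eu_lo + (raw - raw_lo) * (eu_hi - eu_lo) / (raw_hi - raw_lo)

print(scale(16384))  # ~50.0, i.e. a mid-range raw value reads as ~50 degrees C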
RSLinx
Rockwell Automation has developed RSLinx, an OPC server. The Lite version of RSLinx does not contain an OPC server implementation. There are two ways applications can connect to RSLinx: (i) an OPC client (DLL) loaded onto the client machine to communicate with the server, or (ii) a DDE (Dynamic Data Exchange) client connecting to the RSLinx DDE server.
KEPServerEX
KEPServerEX provides interfaces and connectivity for OPC DA and A&E, as well as the Unified Architecture (UA). The Unified Architecture was introduced by the OPC Foundation to remove the dependency on Microsoft COM for client-server connectivity; UA provides a secure way of accessing information from client to server without COM technology. It also supports Windows 7 as the OS.
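For comparison with the COM-based DA sketch above, here is a minimal UA read, assuming the third-party python-opcua library; the endpoint URL and node id are hypothetical. Note that the client talks plain TCP to the server, with no COM registration involved.

from opcua import Client

client = Client("opc.tcp://localhost:4840")  # hypothetical UA endpoint
client.connect()
node = client.get_node("ns=2;i=2")           # hypothetical node id
print(node.get_value())                      # current value of the node
client.disconnect()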
Siemens SIMATIC NET
SIMATIC NET is the industrial communication product offered by Siemens. Siemens also provides a whole suite of OPC interfaces with this product for accessing PLC data.

Tuesday, July 17, 2012

Physical Machine to Virtual Machine Migration Considerations (Solution Architect Perspective)

There are many things to consider while migrating an application from a physical machine to a virtual machine. The four main considerations are CPU, memory, I/O read/write, and storage. From an application perspective, there are two major things an application/solution architect needs to work out, namely the CPU and memory requirements of the virtual machine. In this article, let us look at what to consider while calculating the CPU and memory required for a virtual machine.
CPU Clock Speed
The rate at which a CPU (Central Processing Unit) performs operations is called the CPU clock speed or clock rate, usually expressed in hertz. Clock rates are determined by testing the CPU against a standard instruction set. A higher clock rate means the CPU can perform more instructions within a given time period.
While migrating an application from a physical machine to a virtual machine, it is important to consider the clock rates of both the server currently hosting the application (say X) and the physical ESX server that will host the virtual machine (say Y). Virtual CPUs needed on the ESX server = (X × average CPU utilization × number of CPU cores) / Y.
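Plugging made-up numbers into the formula (using the two servers described below) makes it concrete:

x_ghz = 2.67      # X: clock rate of the current physical server (GHz)
y_ghz = 2.13      # Y: clock rate of the target ESX host (GHz)
avg_util = 0.40   # assumed average CPU utilization on the current server
cores = 2         # assumed CPU cores on the current server

vcpus = (x_ghz * avg_util * cores) / y_ghz
print(round(vcpus, 2))  # ~1.0, so provision 1 vCPU (always round up)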
Are you all set to calculate the virtual CPUs and memory required? Is the formula above for virtual CPU sizing correct? Of course it is, provided both physical servers are of the same capability. What do I mean? Assume the application is currently hosted on an IBM xSeries 235 server. Its specification: 2.67 GHz CPU, L2 cache, 266 MHz DDR (double data rate) memory with a dual-channel memory controller.
Assume the ESX server will be an HP DL380 G7 with a Xeon E5606 processor. Its specification: 2.13 GHz 4-core CPU, L3 cache, 800 MHz DDR3 memory with a 3-channel memory controller.
Can we use the above formula to find the number of virtual CPUs to host on the new server? The formula needs to be used with caution. Though the CPU speed of the new server looks lower than that of the old server, the new server has a higher transfer rate to memory than the old one. The bit transfer rate from CPU to memory is much higher on the second server. Why? The second server uses QPI (QuickPath Interconnect), which carries only memory requests; QPI is much faster than the FSB, which carries both memory and I/O requests.
CPU Cache
A CPU cache is a smaller, faster memory used by the CPU to reduce the time needed to access RAM. There are three levels of cache:
a.      L1 – Each core has its own
b.      L2 – Usually shared by some or all cores
c.      L3 – Usually shared by all cores
More data in the CPU cache increases the speed of CPU operations, so a direct comparison of CPU speed between different CPU models may not be fully valid without considering the CPU cache. In the above example, the HP server provides an 8 MB Level 3 cache, which means CPU operations will be faster when the data to be accessed is already in the L3 cache.
DDR3 Memory
Compared with the DDR memory in the IBM xSeries 235 machine, the DDR3 memory in the HP DL380 is roughly twice as fast. The HP server offers 3-channel 800 MHz memory, which means 800 × 3 × 2 = 4,800 mega-transfers per second (4.8 GT/s), much faster than what the IBM machine provides. So a direct comparison as in the following formula may not be fully valid:
Required memory size on the virtual machine = amount of RAM × average % memory utilization
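Again with made-up numbers, a sketch of this first-cut estimate:

ram_gb = 16           # assumed RAM on the current physical server (GB)
avg_mem_util = 0.55   # assumed average memory utilization observed

vm_ram_gb = ram_gb * avg_mem_util
print(vm_ram_gb)      # 8.8 GB, so size the VM at roughly 9-10 GB with headroom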
Conclusion
Virtual machine configuration with respect to CPU and memory needs to be done with due diligence. When sizing a virtual machine, one needs to take into account the CPU speed, memory access speed, CPU cache, type of DDR memory used, and memory bus (QPI or FSB).

Tuesday, July 3, 2012


Cloud Computing for Business Users

Cloud Computing
Nowadays everyone talks about cloud computing and virtualization. I was curious about how I could explain cloud computing to a business user; the outcome is this blog post. Cloud computing revolves around three things, namely software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).
What is Cloud Computing?
Cloud computing means services are offered over the internet. It could be any of three services: software, hardware, or a platform. Typically these services are provided based on customer demand, and consumers can use as much or as little of a service as they need. The whole cloud is managed by a third-party vendor.
Types of Clouds
There are two main types of clouds:
·         Private cloud – maintained on a private network in a datacenter, catering to a set of specified customers.
·         Public cloud – sells services to anyone on the public internet. Google Apps and Amazon Web Services are examples of public clouds.
The traditional way of hosting an application involves identifying servers, storage, and network, and procuring that infrastructure for the application. Instead of procuring servers, storage, databases, and networks, IaaS allows the customer to use resources available in the cloud. Since these infrastructure services are billed based on usage, the customer has the flexibility to scale up and scale down when required. Amazon Elastic Compute Cloud (EC2) provides web services for compute capacity in the cloud; developers can scale the environment up or down as needed using these web services, and Amazon also allows automatic scaling according to defined conditions. This is especially useful for applications whose usage varies at different times. High-memory instances with up to 68 GB of memory and high-CPU instances with up to 8 virtual cores are possible with this type of service.
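As a sketch of how a developer drives this elasticity, here is a minimal example assuming the boto Python library (any AWS SDK would do); the AMI id is a hypothetical placeholder.

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')
# Scale up: launch one more instance from a (hypothetical) machine image
reservation = conn.run_instances('ami-12345678', instance_type='m1.small')
instance = reservation.instances[0]
print(instance.id, instance.state)
# Scale back down when demand subsides
conn.terminate_instances(instance_ids=[instance.id])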
SaaS refers to software offered on demand. The customer need not build the software from scratch or buy it for a huge sum of money; instead, the software is available based on usage and on demand, and the vendor may collect a monthly or yearly fee for its use. Just as a rented hotel room serves different people on different days, the same software serves multiple customers. This feature is called multi-tenancy.
In the PaaS model, the cloud provides both the infrastructure and the software customers need to run their applications. The customer need not worry about procuring infrastructure or the various software required to develop a particular application. A typical PaaS offering is Google Apps, which automatically saves data and work in the cloud.
Opportunities in Cloud Computing
·         Privacy – The cloud provider can monitor the data and usage patterns of hosted applications. Many customers are not comfortable sharing their data or usage patterns. This is one of the improvement areas where cloud providers need to make progress.

·         Security – Cloud computing offers many benefits, but at the same time it is vulnerable to security threats. This is another area where cloud providers can concentrate and improve the protection of data.