Parikshit Bhaduri

Parikshit is the Vice President of Development at GreenField Software. He leads the product development team for GFS Crane DCIM and is the lead architect for GFS solutions in Intelligent Infrastructure Management.

Recently, I was part of a panel discussion on the "industry of the future". A number of CIOs from the manufacturing industry were on the panel. The topic of discussion was how the industries of today will transform in a connected world.

I emphasized how IoT (Internet of Things) and Industrial IoT will influence the industry of the future:

Sensors everywhere - Each and every device will soon be equipped with sensors sending data in real time, so the factory floor will be able to adjust to changing situations quickly. If a certain point in the assembly line is blocked due to a malfunction or slowdown, jobs will be rescheduled through parallel stations based on sensor data. The end products of the factories of the future will be embedded with sensors, which will ship data back to the factory. The data will include important information on how the device is functioning; this information will allow manufacturers to monitor their products through the lifecycle and send product updates to rectify any issue. I cited the example of Tesla shipping software updates to customers' cars to address issues.

Manufacturer to Services - A manufacturer will have to transform into a service provider. Traditionally, a manufacturer's responsibility ended after a product was shipped from the factory. In future, however, products will be connected through sensors and the manufacturer will have to form a continuous bond with the customer. Even a seemingly mundane product such as a washing-powder packet will carry a sensor and send vital data on how it is used back to the manufacturer, enabling the manufacturer to improve the product.

Create a platform - The success of manufacturers of the future will depend a lot on whether they can create a platform for collecting data, analysing it and acting on it. To achieve this, competitors will have to collaborate. Evidence for this is clearly discernible in Germany, a manufacturing powerhouse, where the government has created a consortium called Industrie 4.0 to bring manufacturers together to innovate in this connected world. As we know, Google is building a car with its own software layer, and Apple is creating the entertainment system of new cars. As cars depend more and more on software, the companies who design and develop that software will hold the key to commercial gains. This shift worries Daimler, BMW and Volkswagen.

Ability to innovate - Manufacturers will need to innovate faster; today they are perceived as legacy businesses. They will have to shake off that perception and encourage innovation by removing all barriers, adopting the culture of Silicon Valley companies.

To summarise, the factories of today will have to change significantly to adapt to a connected world where sensors are ubiquitous. Some of the transformations needed are orthogonal to their current practices and business model. Those who make this transformation will ride IoT to success.

 

Metrics in Data Center

Tuesday, 21 July 2015 16:59

 

Introduction

 

Most of us are familiar with the quote that "if you cannot measure it, you cannot manage it". In all fields, spanning technology and management, a set of metrics is established to measure against stated objectives. The metrics should tell the stakeholders how the system is performing. Metrics on a business can be taken from several different perspectives: financial, customer satisfaction, environmental impact and so on. Just one aspect, such as financials, does not tell the whole story. If the board of a company looks only at the financial aspect and ignores other areas, it may be myopic: today a company may be doing fine on financial metrics such as EPS, revenue and profitability, but if its customer satisfaction index is poor and its brand value suffers due to environmental impact, that does not augur well for the company. Similarly, a data center needs to be viewed from different angles - cost efficiency, power consumption, reliability, customer satisfaction - to make the measurement well rounded.

 

PUE – is that the only metric needed in a data center?

 

PUE – Power Usage Effectiveness is the most well-known of all data center metrics. At the core of the data center are the computing units - servers, storage and switches - which run the applications, store the data, and communicate internally and externally. One of the primary costs of running a data center is the power consumed. The power consumption has two components: power consumed by the computing units and power consumed by the rest of the facilities equipment, such as cooling. PUE is calculated by dividing the total power consumed by the data center by the power consumed by the computing units. The lower the PUE, the more efficient the data center. If the PUE of a data center is 2, it means 50% of the power is used by the computing units. Now if we can bring down the total power while the power drawn by the computing units remains the same, we have increased efficiency by reducing the overhead of functions such as cooling.
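The calculation described above can be sketched in a few lines; the wattage figures below are purely illustrative, not from any real facility:

```python
# Minimal sketch of the PUE calculation: total facility power divided
# by the power drawn by the computing (IT) units. Figures are made up.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT (computing) power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# A facility drawing 1000 kW in total, 500 kW of it by computing units:
print(pue(1000, 500))  # 2.0 -> 50% of the power goes to computing

# Cutting cooling overhead so the total drops to 800 kW, with the same
# IT load, improves PUE:
print(pue(800, 500))   # 1.6
```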

 

The importance of PUE cannot be denied, and every data center should strive to get it as close to 1 as possible. However, PUE is not the only metric; data centers have to consider several others. Furthermore, PUE can be deceptive. For example, if one replaces the computing units with units that consume less power, the total power drawn will fall but PUE will rise, since the facility overhead stays the same. For similar reasons PUE cannot be used to compare data centers: a data center running mostly on renewable energy has only a marginal impact on the environment even if its PUE is slightly worse than that of comparable data centers running on conventional energy.

 

Reliability and availability

 

A data center not only needs to be efficient from a cost and power perspective, it needs to be reliable and available, considering that most data centers run business-critical applications as more and more applications are hosted on the cloud. No customer will tolerate even partial downtime, let alone an outage of the whole data center. Hence the metrics which measure reliability and availability are important. Asset-level availability metrics such as MTBF (Mean Time Between Failures) and MTTR (Mean Time to Repair) should be measured. The other measure of reliability is the number and category of alarms raised in the data center and how quickly those alarms are responded to.
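MTBF and MTTR combine into the standard steady-state availability formula, availability = MTBF / (MTBF + MTTR); a small sketch with illustrative numbers:

```python
# Steady-state availability from MTBF and MTTR. The figures below are
# hypothetical, chosen only to show the arithmetic.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the asset is expected to be up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# An asset failing on average once every 8760 hours (about a year)
# and taking 4 hours to repair:
a = availability(8760, 4)
print(f"{a:.5f}")  # 0.99954, i.e. roughly "three nines" of availability
```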

 

Customer Satisfaction

 

A data center needs to be customer centric; gone are the days when a data center ran outside the glare of the core business. Today it is intimately connected with the business, whether it is a captive data center or one providing facilities for others. A captive data center runs the core business of different LOBs and needs to respond to their needs. A data center which provides colocation and hosting services has to be customer centric in its operations: it has to ensure customer provisioning requests are satisfied and any customer ticket is closed within the SLA. So for data centers, captive or otherwise, compliance with the SLA is extremely important, and it can be measured by the provisioning requests or service tickets that fall outside the SLA - the percentage not meeting the SLA. Closely tied to customer satisfaction is the capacity of a data center: as long as the data center has sufficient capacity in terms of power, cooling and resources, it will be able to service provisioning requests quickly. Hence measuring capacity at all times is paramount for a data center.
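The "percent not meeting SLA" measure mentioned above is a simple ratio over closed tickets; a sketch with an entirely hypothetical ticket list:

```python
# Percentage of service tickets closed outside their SLA.
# The ticket data below is made up for illustration.

tickets = [
    {"id": "T1", "sla_hours": 4, "resolution_hours": 3.5},
    {"id": "T2", "sla_hours": 4, "resolution_hours": 6.0},   # breached
    {"id": "T3", "sla_hours": 8, "resolution_hours": 7.9},
    {"id": "T4", "sla_hours": 8, "resolution_hours": 10.0},  # breached
]

breached = [t for t in tickets if t["resolution_hours"] > t["sla_hours"]]
pct_outside_sla = 100.0 * len(breached) / len(tickets)
print(f"{pct_outside_sla:.1f}% of tickets missed their SLA")  # 50.0%
```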

 

Conclusion

 

I recently hosted a panel discussion on data center metrics, and the panelists pretty much concluded that metrics are extremely important for data center operations and need to be viewed across the different areas outlined above. Also, with the availability of DCIM software from companies such as GreenField, it is easy to capture and view these metrics in real time. GreenField's software GFS Crane provides a dashboard with key metrics such as PUE, availability and capacity utilization, and one can drill down into reports for a granular view. With the automation provided by software such as GFS Crane, it is easy to stay on top of things, react with agility as situations change, and take proactive steps wherever possible.

 

 

Introduction

 

Not unlike Network Management Systems (NMS), Data Center Infrastructure Management (DCIM) software also monitors a diverse set of equipment, ranging from servers and network switches to Power Distribution Units (PDUs), panels, sensors and Diesel Generator (DG) sets. These devices speak different protocols - MODBUS, SNMP and BACnet - and the parameters that are monitored also differ. For example, the monitored parameters from a DG set may be output voltage, output power and output current for all phases, while for a sensor they may be temperature and relative humidity. The software needs to capture the data from the various devices, keep it in a persistent store, and report and alert on it. This poses a problem if we want to store it in the traditional row/column format of a relational database. We will explore the implementation options and the method adopted.

 

Implementation Options in RDBMS

 

If we choose to store the monitored data in traditional relational form, we have a couple of options:

 

  1. Build a super set of column list from all the monitored devices

    If we choose this option, then let's say we have 3 devices A, B and C; for A the monitored parameters are x and y, for B they are y and z, and for C they are x and z. A table with columns x, y and z should then suffice. However, in the real world the number of device types can run into hundreds, with each type having multiple unique parameters. In that case the number of columns will easily run into a few hundred, making the table design unwieldy. Furthermore, when the table is populated with data it will be sparse. And of course, every time a new device with a unique parameter is added, one will have to add columns to the table, making the design untenable.

  2. Have a table per device

    This approach is somewhat better than the previous one: in the design, add a table unique to each type of device. For example, there will be a table for DG sets with columns for the parameters monitored on a DG set, a table for sensors with temperature and relative humidity as columns, and so on and so forth. It sounds logical. However, this design suffers from similar deficiencies. Say you have 2 DG sets from two different manufacturers whose monitored parameters, although overlapping, are not exactly the same. What do we do then - add two different tables for the 2 DG sets? There goes the design principle for a toss!
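The sparsity problem with the first option can be seen even in the tiny A, B, C example above; a sketch (using `None` to stand in for SQL NULL):

```python
# Illustration of the sparse super-set table: with a single column list
# {x, y, z}, every row in the A/B/C example already carries a NULL, and
# the gaps only grow as more device types with unique parameters arrive.
COLUMNS = ["x", "y", "z"]

readings = {
    "A": {"x": 1.2, "y": 3.4},   # A reports x, y
    "B": {"y": 5.6, "z": 7.8},   # B reports y, z
    "C": {"x": 9.0, "z": 1.1},   # C reports x, z
}

for device, values in readings.items():
    row = [values.get(c) for c in COLUMNS]  # None = SQL NULL
    print(device, row)
```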

 

How to retrofit in an RDBMS-based solution?

 

Having described the issues that we encountered, how do we design the persistence of monitored data? The natural choice would have been a NoSQL database such as Cassandra or a similar persistent store. The NoSQL data model is a dynamic-schema, column-oriented data model: unlike in a relational database, you do not need to model all of the columns required by your application up front, as each row is not required to have the same set of columns. Columns and their metadata can be added by your application as they are needed, without incurring downtime.

Since we had to retrofit the design into an already existing relational schema, we chose to have a single text column (a varchar field in RDBMS terminology) sufficiently large to hold the monitored data. We devised a scheme such that when we acquire data we record which field it is, what its unit is and what its value is. For example, if from a sensor we acquire temperature and relative humidity, the data written into the table will be "field = temp, unit = Celsius, value = 22/field = RH, unit = %, value = 50". Similarly, for a generator a data row may be "field = voltage, unit = volt, value = 240/field = power, unit = KW, value = 100". Both these data points go into the same column, with another column holding the unique device id. Having done this, we simplified the design, its maintenance and reporting: a separate reporting module, which normalizes the data after suitably extracting it from the monitored table, suffices for all kinds of reporting from each unique device. The scheme is flexible enough to add new devices with their own unique parameters without changing the core tables. This is how we married structured and unstructured data.

 

What does one want to see in a data center layout?

 

When one looks at a geographical map of a city, one is interested in the streets, the buildings, the parks, the commercial establishments, the houses and so on. Similarly, when a data center operator looks at a data center layout, he/she is interested in the aisles: 1) cold aisle - the aisle from which cool air is blown to the racks 2) rack aisle - a row where racks are placed 3) hot aisle - the aisle on the back side of the racks where the hot air comes out. In addition to the aisles, a number of other pieces of equipment are seen in data centers, such as 1) Precision Air Conditioners (PACs) 2) Power Distribution Units (PDUs) 3) Panels. By looking at a data center layout diagram one must be able to tell the current state of the layout - where each asset is and how much of it is utilized.

Hence broadly the requirements for data center layout are:

1) To depict the aisles - hot, cold and rack aisles

2) To show the equipment on the floor - PDUs, Panels, PACs

3) To show walls separating adjacent rooms

4) To show entry/exit doors

5) To be able to add/move/delete assets such as Racks, PDUs, Panels, sensors.

Implementation methodology & technology choice

The model and visualization implementation choices are many. The obvious choice is to use technologies such as JavaScript and JSP. However, it is ideal to separate the model from the visualization. Let us take a couple of examples to drive home the point. When we describe a circle, it is sufficient to say where the center is and what the radius is; how we draw the circle on the screen is a separate task. Similarly, when we describe a room, if we specify its coordinates and in which corner some furniture is placed, we can form an idea of the room even before it is shown to us pictorially. It is a universally accepted principle in computing that the model (or data) is separated from the algorithm which deals with the data. If we have a text file which describes the data center, and that text file can be easily edited by hand or by a program, the code becomes much more flexible to extend and maintain.

With that goal in mind we set out to choose the best implementation for data center layout. GFS accomplished this separation by using XML and XSLT technologies. We were also able to overlay the assets with real-time monitored data. The solution is extensible, flexible and can be customized for a particular data center. We did in fact customize it for a diesel generator enclosure, showing DG sets, buffer tanks and flow meters, vindicating XML and XSLT as the right choice for modelling and visualization of a data center layout.
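To make the model/visualization split concrete, here is a hypothetical XML layout model read with Python's standard `xml.etree` parser. The element and attribute names are assumptions for illustration, not GFS Crane's actual schema:

```python
# A layout model as plain XML: the file says *what* is on the floor and
# *where*; a separate XSLT (or any renderer) decides how to draw it.
import xml.etree.ElementTree as ET

layout_xml = """
<floor name="DC-Room-1" width="20" depth="12">
  <aisle type="cold" x="0" y="0" width="20" depth="2"/>
  <aisle type="rack" x="0" y="2" width="20" depth="2">
    <rack id="R1" x="0" units="42"/>
    <rack id="R2" x="2" units="42"/>
  </aisle>
  <aisle type="hot" x="0" y="4" width="20" depth="2"/>
  <pdu id="PDU-1" x="19" y="0"/>
</floor>
"""

floor = ET.fromstring(layout_xml)
# The model alone answers layout questions without any rendering code:
racks = floor.findall("./aisle[@type='rack']/rack")
print(floor.get("name"), "has", len(racks), "racks")  # DC-Room-1 has 2 racks
```

Because the model is a plain text file, adding a rack or a DG enclosure is an edit to the XML, not a code change, which is the flexibility argued for above.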

