Are Master Data Management and Hadoop a Good Match?

Master Data is the critical electronic information about the company that we cannot afford to lose. Accordingly, we should sanitise it, look after it, and store it safely in several separate places that are independent of each other. The advent of Big Data introduced the current era of huge repositories ‘in the clouds’. They are not literally in the clouds, of course, but at least they are remote. This short article looks at Hadoop and at whether it is a good platform on which to back up your Master Data.

About Hadoop

Hadoop is an open-source Apache software framework built on the assumption that hardware failure is so common that redundant copies of the data are unavoidable. It comprises a storage layer, the Hadoop Distributed File System (HDFS), and a processing layer, MapReduce, that distributes work to the nodes holding the data, where it is processed faster and more efficiently. Prominent users include Yahoo! and Facebook; in fact, more than half of the Fortune 50 companies were using Hadoop in 2013.
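
To give a feel for how that processing layer works, below is a rough sketch of the classic word-count job written against the Hadoop MapReduce Java API: the map step runs on the nodes that hold the data blocks and emits (word, 1) pairs, and the reduce step adds them up per word. The class name is our own, the input and output paths are supplied on the command line, and a working Hadoop installation is assumed; treat it as an illustration rather than production code.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Runs on the nodes holding the data: emits (word, 1) for every token.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Gathers the partial counts for each word and sums them.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);   // combine locally before shuffling
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }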

Hadoop, which reached its 1.0 release in December 2011, has survived its baptism of fire and become a respected, reliable option. But is this something the average business owner can tackle on their own? Bear in mind that open-source software generally comes with little implementation support from the vendor.

The Hadoop Strong Suit

  • Free to download, use and contribute to
  • Everything you need ‘in the box’ to get started
  • Distributed across multiple firewalled computers
  • Fast processing of data held in efficient cluster nodes
  • Massively scalable storage you are unlikely to run out of

Practical Constraints

There is more to Hadoop than writing to WordPress. The most straightforward routes in are uploading data through the Java API (a sketch follows below), working through a higher-level interface layer, or using third-party vendor connectors such as SAS/ACCESS. The system does not replace the need for IT support, although it is cheap and exceptionally powerful.
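
As a rough illustration of the Java route, the sketch below copies a local Master Data extract into HDFS using Hadoop’s FileSystem API. The file name, the HDFS directory and the NameNode address are hypothetical placeholders; on a real cluster the connection settings normally come from the site configuration files rather than being set in code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Uploads a local master-data extract into HDFS, where it is block-replicated.
    public class MasterDataUpload {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder address; usually supplied by core-site.xml on the cluster.
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

            try (FileSystem fs = FileSystem.get(conf)) {
                Path localExtract = new Path("/exports/master-data.csv");  // hypothetical local file
                Path backupDir = new Path("/backups/master-data/");        // hypothetical HDFS directory
                fs.mkdirs(backupDir);
                fs.copyFromLocalFile(localExtract, backupDir);             // HDFS replicates the blocks
            }
        }
    }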

The Not-Free Safer Option

Smaller companies without in-depth in-house support are wise to engage a technical intermediary. There are companies that provide commercial implementations backed by ongoing support. Microsoft, Amazon and Google, among others, all have commercial Hadoop offerings in their catalogues, and support teams at the end of the line.

Check our similar posts

Why DevOps Matters: Things You Need to Know

DevOps creates an agile relationship between system development and operating departments, so the two collaborate in delivering results that are technically effective and work well for customers and users. This is an improvement over the traditional model, where development delivers a complete design and then spends weeks or even months afterwards fixing client-side problems that should never have occurred.
Writing for Tech Radar, Nigel Wilson explains why it is important to roll out innovation quickly to leverage advantage. This implies the need for a flexible organisation capable of thinking on its feet and forming matrix-based project teams to ensure that development is reliable and cost-effective.
Skirmishes in Boardrooms
This cooperative approach runs counter to traditional silo thinking, where Operations does not understand Development, while Development treats the former as problem children. This is a natural outcome of team-centred psychology, and it is the reason different functions pull up the drawbridges at the entrance to their silos. The situation needs managing before it corrodes organisational effectiveness. DevOps aims to cut through this web of conflict and produce faster results.

The Seeds of Collaboration

Social and personal relationships work best when the strengths of each party compensate for the deficiencies of the other. In the case of development and operations, development lacks a full understanding of the daily practicalities operating staff face. Conversely, operations lacks, and should lack, knowledge of the nuances of digital automation, for the very reason that it is not their business.
DevOps straddles the gap between these silos by building bridges towards a cooperative way of thinking, in which matrix teams work together to define a problem, translate it into needs, and spec the system to resolve them. It is more a culture than a method. Behavioural change naturally leads to continuous delivery and continuous deployment. Needless to say, only the very best need apply for the roles of client representative, functional tester and developer lead.

Is DevOps Worth the Pain of Change?

Breaking down silos encroaches on individual managers’ turf. We should only automate to improve quality and save money, and these savings often distil into organisational change, so the matrix team may find itself in the middle of a catfight. Despite the pain associated with change resistance, DevOps more than pays its way in terms of benefits gained. We close by considering what these advantages are.

An Agile Matrix Structure: Technical innovation is happening at a blistering rate. The IT industry can no longer afford to churn out inferior designs that take longer to fix than to create, and we cannot afford to allow office politics to stand in the way of progress. Silos, and the teams built within them, are custodians of routine, and that does not sit well with development.

An Integrated Organisation: DevOps not only delivers operational systems faster through continuous testing; it also creates an environment in which cross-functional teams work together towards a shared objective. When development understands the challenges that operations faces, and operations understands the technical limitations, a new perspective of ‘we are in this together’ emerges.

The Final Word: With an understanding of human dynamics pocketed, a DevOps project may be easier to commission than you first think. The traditional way of doing development, with a waterfall delivery at the end, is akin to a two-phase production line in which liaison is the weakest link and loss of quality is inevitable.

DevOps avoids this risk by having both parties work side by side; we need them both to produce the desired results. That is, at least until robotics takes over and there is no longer a human element in play.

Failure Mode and Effects Analysis

 

Any business in the manufacturing industry knows that plenty can go wrong during the development stages of a product. And while you can certainly learn from each failure and improve the process the next time around, doing so costs a lot of time and money.
Failure Mode and Effects Analysis (FMEA) is a widely used operations-management procedure for identifying and analysing potential reliability problems while a product is still in the early stages of production.

FMEAs help us focus on and understand the impact of possible process or product risks.

The FMEA method for quality is based largely on the traditional practice of achieving product reliability through comprehensive testing, together with techniques such as probabilistic reliability modelling. To gain a better understanding of the process, let’s break it down into its two basic components: the failure mode and the effects analysis.

Failure mode is defined as the means by which something may fail. It essentially answers the question “What could go wrong?” Failure modes are the potential flaws in a process or product that could have an impact on the end user – the customer.

Effects analysis, on the other hand, is the process by which the consequences of these failures are studied.

With the two aspects taken together, the FMEA can help:

  • Discover the possible risks that can come with a product or process;
  • Plan out courses of action to counter these risks, particularly those with the highest potential impact; and
  • Monitor the action plan results, with emphasis on how risk was reduced.
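
To make the prioritisation concrete, here is a minimal sketch that ranks failure modes by the Risk Priority Number (severity × occurrence × detection) commonly used on FMEA worksheets. The failure modes and ratings below are invented purely for illustration; real ratings come from the cross-functional team.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // A single row of a hypothetical FMEA worksheet.
    record FailureMode(String description, int severity, int occurrence, int detection) {
        // Risk Priority Number: higher means the failure mode deserves attention sooner.
        int rpn() {
            return severity * occurrence * detection;  // each rating typically runs 1-10
        }
    }

    public class FmeaSketch {
        public static void main(String[] args) {
            List<FailureMode> worksheet = new ArrayList<>();
            // Illustrative entries only.
            worksheet.add(new FailureMode("Weld cracks under load", 9, 3, 4));
            worksheet.add(new FailureMode("Label printed off-centre", 2, 6, 2));
            worksheet.add(new FailureMode("Seal fails in humid storage", 7, 4, 5));

            // Rank by RPN so the action plan targets the riskiest items first.
            worksheet.sort(Comparator.comparingInt(FailureMode::rpn).reversed());

            for (FailureMode fm : worksheet) {
                System.out.printf("RPN %3d  %s%n", fm.rpn(), fm.description());
            }
        }
    }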


Without Desktop Virtualisation, you can’t attain True Business Continuity

Even if you’ve invested in virtualisation, off-site backup, redundancy, data replication, and other related technologies, I’m willing to bet your BC/DR program still lacks an important ingredient. I bet you’ve forgotten about your end users and their desktops.

Picture this. A major disaster strikes your city and brings your entire main site down. No problem. You’ve got all your data backed up at another site. You just need to connect to it and, voila, you’ll be back up and running in no time.

Really?

Do you have PCs ready for your employees to use? Do those machines already have the applications needed to work on your data? If you still have to install them, that is going to take a lot of precious time. And when your users get hold of those machines, will they be facing exactly the same interface they have been used to?

If not, more time will be wasted as they try to familiarise themselves. By the time you’re able to declare ‘business as usual’, you’ll have lost customer confidence (or even customers themselves), missed business opportunities, and dropped potential earnings.

That’s not going to happen with desktop virtualisation.

The beauty of virtualisation

Virtualisation in general is a vital component in modern Business Continuity/Disaster Recovery strategies. For instance, by creating multiple copies of virtualised disks and implementing disk redundancy, your operations can continue even if a disk breaks down. Better yet, if you put copies on separate physical servers, then you can likewise continue even if a physical server breaks down.

You can take an even greater step by placing copies of those disks on an entirely separate geographical location so that if a disaster brings your entire main site down, you can still gain access to your data from the other site.

Because you’re essentially just dealing with files and not physical hardware, virtualisation makes the implementation of redundancy less costly, less tedious, greener, and more effective.

But virtualisation, when used for BC/DR, is mostly focused on the server side. As we pointed out earlier in the article, server-side BC/DR efforts are not enough: a significant share of business operations also depends on the client side.

Desktop virtualisation (DV) is very similar to server virtualisation, and it comes with nearly the same benefits. That means a virtualised desktop can be copied just like an ordinary file, and if you have a copy of a desktop, you can easily switch to it if the active copy is destroyed.
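
To underline the point that a virtualised desktop is, at bottom, just a set of files, here is a toy sketch that copies a desktop image to a standby location. The paths are hypothetical, and in practice a DV platform would handle replication, streaming and provisioning for you; this is only meant to show why keeping a spare copy is so much easier than rebuilding a physical PC.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    // Copies a virtual desktop image to a standby location (illustrative only).
    public class DesktopReplica {
        public static void main(String[] args) throws IOException {
            // Hypothetical paths: the primary image and an off-site replica store.
            Path primary = Path.of("/vdi/images/alice-desktop.qcow2");
            Path replicaDir = Path.of("/mnt/dr-site/vdi-replicas");

            Files.createDirectories(replicaDir);
            // Because the desktop is just a file, keeping a spare copy is a file copy.
            Files.copy(primary, replicaDir.resolve(primary.getFileName()),
                       StandardCopyOption.REPLACE_EXISTING);
            System.out.println("Replica refreshed at " + replicaDir);
        }
    }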

In fact, if the PC on which the desktop is running becomes incapacitated, you can simply move to another machine, stream or install a copy of the virtualised desktop there, and get back into the action right away. If all your PCs are incapacitated after a disaster, rapid provisioning of your desktops will keep customers and stakeholders from waiting.

In addition to that, DV presents your users with the same interface they had on their previous PC. This particular feature is actually very important to end users. Users normally have their own way of organising things on their desktops, and the moment you put them in front of a desktop that is not their own, even if it has the same OS and the same set of applications, they’ll feel disoriented and won’t be able to perform optimally.

Contact Us

  • (+353)(0)1-443-3807 – IRL
  • (+44)(0)20-7193-9751 – UK

Ready to work with Denizon?