Risk Assessment

Risk assessment is a vital component of BC (Business Continuity) planning. Through risk assessment, your company can determine which vulnerabilities its assets possess and quantify the loss of value each asset would suffer against a specific threat. You can then rank your assets so that those whose loss is most likely to cripple your business when a specific disaster strikes are given top priority.

However, a poorly implemented risk assessment can also lead to unnecessary expenditure. Many risk assessors are so enthusiastic about pointing out risks that, by the end of the assessment, they have over-appraised even those with practically zero probability of ever occurring.

We can assure you of a realistic assessment of your assets' risks and propose cost-effective countermeasures. Here is what we can do:

  • Identify your unsafe practices and propose the best alternatives.
  • Perform qualitative risk assessment if you want fast results and fewer interruptions to your operations.
  • Perform quantitative risk assessment if you want the most accurate depiction of your risks and the justifiable cost of countering each (see the sketch after this list).
  • Conduct frequency and consequence analysis to identify unforeseen harmful events and determine their effects on the various components of your organisation and its surroundings.
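
To give a flavour of the quantitative approach, one widely used ranking metric is the Annualised Loss Expectancy (ALE): the cost of a single incident (SLE) multiplied by how often that incident is expected per year (ARO). The sketch below is a minimal illustration only; the assets, threats and figures are hypothetical placeholders, not drawn from any real assessment.

```python
# Minimal sketch: ranking assets by Annualised Loss Expectancy (ALE).
# ALE = SLE (cost of a single incident) x ARO (expected incidents per year).
# All names and figures below are hypothetical placeholders.

assets = [
    # (asset, threat, single loss expectancy in EUR, annualised rate of occurrence)
    ("Customer database", "ransomware",  250_000, 0.10),
    ("Web storefront",    "DDoS outage",  40_000, 0.50),
    ("Office laptops",    "theft",         2_500, 2.00),
]

ranked = sorted(
    ((asset, threat, sle * aro) for asset, threat, sle, aro in assets),
    key=lambda row: row[2],
    reverse=True,
)

for asset, threat, ale in ranked:
    print(f"{asset:18} {threat:12} ALE = EUR {ale:,.0f}/year")
```

Ranking by ALE is what lets the highest-impact assets rise to the top of the priority list described above.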


How DevOps oils the Value Chain

DevOps, a clipped compound of "development" and "operations", is a way of working whereby software developers sit in the same team as the project beneficiaries. This client-centred approach extends the project plan to cover the life cycle of the product or service for which the software is developed.

We can then no longer speak of a software project for, say, Joe's Accounting App. The software has no intrinsic value of its own; it follows that the software engineers are building an accounting product. This is a small but crucially important distinction, because they are no longer in a silo with different business interests.

To take the analogy further, the developers are no longer contractors who might be tempted to stretch out the process. They are members of Joe's accounting company, and they are just as keen to get to market fast as Joe is to start earning income. DevOps uses this synergy to achieve the overarching business goal.

A Brief Introduction to DevOps

You can skip this section if you have already read this article. If not, you need to know that DevOps is a culture rather than merely a working method. The three "members" are the software developers, the beneficiaries, and a quality control mechanism. The developers break their task into smaller chunks instead of releasing the code to quality control as a single batch. As a result, the review process happens continuously, along the simplified lines shown below.

Cycle 1:  Code → QC → Test
Cycle 2:        Code → QC → Test
Cycle 3:              Code → QC → Test
Cycle 4:                    Code → QC → Test

Key: Code = developers, QC = quality control, Test = beneficiary

This is a marked improvement over the previously cumbersome method below.

Write the Code → Test the Code → Use the Code → Evaluate, Schedule for Next Review → (back to the start)
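
To make the contrast concrete, here is a minimal, illustrative sketch of the two flows. It simply reports when the first piece of beneficiary feedback arrives; the stage durations and increment count are arbitrary placeholders rather than figures from any real project.

```python
# Minimal sketch: when does the first piece of beneficiary feedback arrive?
# Stage durations (in days) are arbitrary placeholders, not real figures.

STAGES = {"code": 5, "qc": 2, "test": 1}   # effort per increment of work

def linear_batch(n_increments: int) -> int:
    """Old method: code everything, then QC everything, then hand it all over."""
    return sum(duration * n_increments for duration in STAGES.values())

def devops_flow() -> int:
    """DevOps flow: the first small increment goes Code -> QC -> Test on its own,
    so feedback arrives after a single pass through the stages."""
    return sum(STAGES.values())

if __name__ == "__main__":
    n = 6
    print(f"First feedback, linear batch of {n} increments: day {linear_batch(n)}")  # day 48
    print(f"First feedback, DevOps increments:              day {devops_flow()}")    # day 8
```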

Working quickly and releasing smaller amounts of code means the DevOps team learns quickly from its mistakes, and should reach product release ahead of any competitor using the older, more linear method. The shared way of working frees up huge resources in terms of user experience and in-line QC practices. Instead of sitting in a silo working on its own, development finds it has a richer brief and more support from being "on the same side of the organisation".

The Key Role that Application Program Interfaces Play

Application Program Interfaces, or APIs for short, are building blocks for software applications. Using these ready-made software bridges speeds development up. A good example is the PayPal integration we find on so many websites today. APIs are not just for commercial sites; they can reduce costs and improve efficiency considerably.

The following diagram, courtesy of TIBCO, illustrates how second-party applications integrate with the PayPal architecture via an API façade.
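
For a rough feel of how a client application talks to such a façade, the sketch below posts a payment request to a REST endpoint. The URL, payload fields and token are hypothetical placeholders, not PayPal's actual API; a real integration would follow the provider's own documentation.

```python
# Minimal sketch: calling a payment-provider API facade over HTTPS.
# The endpoint, payload fields and token below are hypothetical placeholders,
# not PayPal's real API.

import requests

API_BASE = "https://api.example-payments.com/v1"   # placeholder facade URL
ACCESS_TOKEN = "replace-with-a-real-oauth-token"   # placeholder credential

def create_payment(amount: str, currency: str, description: str) -> dict:
    """Ask the facade to create a payment and return the provider's response."""
    response = requests.post(
        f"{API_BASE}/payments",
        json={
            "amount": {"value": amount, "currency": currency},
            "description": description,
        },
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()   # surface HTTP errors instead of ignoring them
    return response.json()

if __name__ == "__main__":
    print(create_payment("49.99", "EUR", "Joe's Accounting App subscription"))
```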


[Diagram: second-party applications integrating with PayPal via an API façade, courtesy of TIBCO]

The DevOps Revolution Continues…

We close with some important insights from an interview with Jim Stoneham, who was general manager of the Yahoo Communities business unit at the time Flickr became a part of it. "Flickr was a codebase," Jim recalls, "that evolved to operate at high scale over 7 years – and continuing to scale while adding and refining features was no small challenge. During this transition, it was a huge advantage that there was such an integrated dev and ops team."

The "maturity model", as engineers currently refer to DevOps status, enables developers to learn faster and deploy upgrades ahead of their competitors. This means the client reaches and exceeds break-even sooner: DevOps lubricates the value chain so that companies add value to a product faster. One reason it worked so well at Flickr was the immense trust between Dev and Ops, and that is a lesson we should all learn.

"We transformed from a team of employees to a team of owners. When you move at that speed, and are looking at the numbers and the results daily, your investment level radically changes. This just can't happen in teams that release quarterly, and it's difficult even with monthly cycles." (Jim Stoneham)

How Mid-South Metallurgical cut Energy Use by 22%

Mid-South Metallurgical in Murfreesboro, Tennessee operates an energy-intensive plant providing precision heat treatment for high-speed tools, as well as metal annealing and straightening services. This was a great business to be in before the energy crisis struck, at about the same time the 2009 recession arrived. In no time at all the market was down 30%.

Investors had a pile of capital sunk into Mid-South's three facilities, spread across 21,000 square feet (2,000 square metres) of enclosed space. Within them, a number of twenty-five horsepower compressors plus a variety of electric, vacuum and atmospheric furnaces pumped out heat 24/7, 52 weeks a year. After the company called in the U.S. Department of Energy for assistance, several possibilities presented themselves.

Insulate the Barium Chloride Salt Baths

The barium chloride salt baths used in the heat treatment process, operating at 1600°F (870°C), were a natural choice, since they cannot be cooled below 1200°F (650°C) when out of use without the barium chloride hardening and clogging up the system. The amount of energy needed to prevent this came down considerably after the baths were covered and insulated, giving a recurring annual electricity saving of $53,000.

Manage Electrical Demand & Power

The utility delivers power at 480 volts to the three plants, which between them consume between 825 and 875 kilowatt-hours, depending on the season. Prior to the energy crisis, Mid-South Metallurgical regarded this level of consumption as a given. Following the Department of Energy survey, the company replaced its laminar flow burner tips with cyclonic burner tips and implemented a number of other modifications to further enhance thermal efficiency. The overall natural gas reduction was 20%.

Implement Large Scale Site Lighting Upgrade

The 24/7 nature of the business makes lighting costs a significant factor. Prior to the energy upgrade, lighting came from 44 older-type 400-watt metal halide fixtures. By replacing these with 88 eight-foot (2.5-metre) fluorescent fittings, Mid-South lowered its lighting maintenance and operating costs by 52%.
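
As a rough sense of the scale involved, the sketch below estimates the annual energy drawn by the original metal halide lighting using only the figures quoted above; ballast losses and the wattage of the replacement fittings are not stated here, so only the pre-upgrade baseline is computed.

```python
# Back-of-the-envelope sketch: baseline energy of the original lighting.
# Uses only the figures quoted above; ballast losses are ignored and the
# wattage of the replacement fluorescent fittings is not given here.

FIXTURES = 44
WATTS_PER_FIXTURE = 400           # metal halide lamps, excluding ballast losses
HOURS_PER_YEAR = 24 * 365         # the plant runs 24/7

baseline_kwh = FIXTURES * WATTS_PER_FIXTURE * HOURS_PER_YEAR / 1000
print(f"Baseline lighting energy: about {baseline_kwh:,.0f} kWh per year")
# -> about 154,176 kWh per year before the upgrade
```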

The Mid-South Metallurgical Trophy Cabinet

These three improvements cut energy use by 22%, reduced peak electrical demand by 21% and brought total energy costs down by 18%. Mid-South continues to monitor energy consumption at each strategic point as it seeks out even greater efficiency in conjunction with its people.

8 Reasons why you Need to Undertake Technical and Application Assessments

Are your information assets enabling you to operate more cost-effectively, or are they drawing in more risks than you are actually aware of? Either way, you need a better picture of those assets to see whether your IT investments are delivering the benefits you expected and to identify areas where improvements should be made.

The best way to answer those questions is through technical and application assessments. In this post, we'll identify 8 good reasons why it is now imperative to undertake such assessments.

1. Address known issues – Perhaps the most common reason companies undertake a technology/application assessment is to identify the causes of existing issues, such as those related to data accessibility, hardware and software scalability, and performance.

2. Cut down liabilities and risks – Unless you know what and where the risks are, there is no way to implement an appropriate risk mitigation strategy. A technology and application assessment enables you to thoroughly test and examine your information systems, see where your business-critical areas and points of failure lie, and then act on them.

3. Discover emerging risks – Some risks may not yet be as threatening as others, but it is certainly reassuring to know whether any exist. That way, you can either nip them in the bud or keep them under observation.

4. Comply with regulations – Regulations such as SOX require you to establish adequate internal controls to achieve compliance, while others call for the protection of personally identifiable information. Assessments help you pinpoint processes that lack controls, identify data that needs protection, and flag areas that fail to meet regulatory requirements, so you can act accordingly and keep your company clear of tedious, time-consuming and costly sanctions.

5. Enhance performance – Poor performance is not always caused by ageing hardware or an overloaded infrastructure. Sometimes the culprits are unsuitable configuration settings, inappropriate security policies, or misplaced business logic. A well-executed assessment provides enough information to build a more cost-effective action plan and to help you avoid an expensive but useless purchase.

6. Improve interoperability – Disparate technologies working in complete isolation from each other may be preventing you from realising the full potential of your IT ecosystem. By examining your IT systems, you may discover ways to make them interoperate and, in turn, harness untapped capabilities of assets you already own.

7. Ensure alignment of IT with business goals – An important factor in achieving IT governance is the proper alignment of IT with business goals. IT processes need to be assessed regularly to ensure that this alignment continues to exist; where it does not, the necessary adjustments can be made.

8. Provide assurance to customers and investors – Escalating cases of data breaches and identity theft are making customers and investors more conscious of a company's ability to keep sensitive information confidential. By conducting regular assessments, you can show your customers and investors the concrete steps you take to preserve that confidentiality.

Ready to work with Denizon?