09/28/2015

Not very long ago, Catalogic and International Data Corporation (IDC) co-hosted a webinar on the importance of copy data management (CDM). The webinar covered:

  • Current trends
  • Challenges involved in CDM
  • How to approach CDM
  • Client requirements and expectations
  • Client results

Laura DuBois, VP of Storage Research at IDC, presented IDC’s insights into copy data management, gleaned from years of research and direct experience with data management.

Here are a few highlights from the webinar to help give you a leg up on the issue of managing your company’s excess copy data.

Data Volumes Are Growing, but IT Spending Is Not Keeping Pace

In her presentation, Laura mentions that “data management is becoming more challenging as data is located in a variety of locations and repositories, not just on premise, but now in public clouds, in software services, and in private clouds as well.”

The increasing number of potential locations for storing backup or copy data vastly increases the complexity of managing that data, not to mention the disaster recovery process the copy data was ostensibly created for in the first place.

The increased difficulty of managing copy data and the increased stringency of service-level agreements (SLAs) are driving data management costs up faster than the average IT budget is growing. This, in turn, is making it harder for storage administrators and managers to meet SLAs concerning availability, recovery, and performance.

The Hardware Budget for Storing Copy Data Is Expected to Top $50 Billion by 2018

According to IDC research that Laura presented in the webinar, the various copies of data created for analytics, data warehousing, reporting, test and dev, archiving, compliance, disaster recovery, and every other use that copy data is put to consume roughly 60% of total storage capacity, in aggregate, across the industry.

In other words, more than half of your company’s storage hardware budget may be spent just storing redundant data.

The worst part? Most of these copied data files are completely unnecessary. Many copies are created as a “what if we need this later” contingency by individuals, with little or no coordination of copy data management between them.

In some cases, an organization might have as many as 50 copies of the same piece of data, although the industry average according to IDC is closer to 15 to 20 copies of any given piece of data. For basic data backup and disaster recovery tasks, having so many copies is excessive, so there’s definitely room to minimize.
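
To make the math concrete, here is a small back-of-the-envelope sketch. The 100 TB primary footprint and the five-copy target are assumed example figures, not numbers from the webinar; only the 15-to-20-copy industry average comes from IDC.

```python
# Back-of-the-envelope estimate of capacity consumed by copy data.
# The primary_tb figure is a hypothetical example; the copy counts
# reflect the IDC range cited above (15-20 copies on average).

primary_tb = 100          # assumed primary data footprint, in TB
avg_copies = 15           # low end of the IDC industry average
target_copies = 5         # a leaner target for backup/DR/test-dev

current_copy_tb = primary_tb * avg_copies
target_copy_tb = primary_tb * target_copies

print(f"Copy data today:    {current_copy_tb} TB")
print(f"Copy data trimmed:  {target_copy_tb} TB")
print(f"Capacity reclaimed: {current_copy_tb - target_copy_tb} TB "
      f"({(1 - target_copies / avg_copies):.0%} less)")
```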

Excessive Copy Data Slows Down Recovery

One of the major drawbacks of having too much copy data is that it slows down backup and recovery tasks. The more redundant or unnecessary copy data you have, the harder it becomes to meet your backup windows, if you can meet them at all.

In a worst-case scenario, a company is left with no downtime window at all as everyone moves to an “always on” environment. For companies with too many extraneous backup files, the backup window can stretch out to take 50% or more longer than it should.

Actual restore and recovery times also suffer when an organization has too many redundant copies of a file. Generally speaking, this increase is proportional to the one incurred by your backup process.

Even worse, storing all of this extraneous data means eating up more storage space and expending more energy for storing data, which translates into wasted IT budget.

The Five-Step Solution

In her presentation, Laura highlighted that there are five key steps that companies should start with when addressing CDM challenges:

  1. Define the Business Requirements for Your Copy Data. Each instance of copy data should serve a purpose. Identifying which copies have a purpose, and what that purpose is, is the first step in handling your excess copy data.
  2. Determine the Difference between What Exists and What’s Required. Once you know what your business’ requirements for copy data are, you can compare what you have against what’s really needed.
  3. Put Policies in Place to Pull Back from Creating Excess Copies. Eliminate any unnecessary extra copies that are just eating up storage space and slowing down data management tasks.
  4. Assess Opportunities to Use a Single Copy for Multiple Users. If you can repurpose a single copy for multiple users, you don’t need a unique copy for each individual person, which reduces your copy data load.
  5. Put a Monitoring System in Place. You can’t measure what you can’t monitor, and you can’t change what you can’t measure. You need visibility of your copy data so that you can enact control measures and rules (see the sketch after this list).
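
As a minimal illustration of step 5, the sketch below flags datasets whose copy counts exceed a policy threshold. The inventory, dataset names, and threshold are all hypothetical stand-ins for whatever export your copy data management or storage reporting tool provides.

```python
# Minimal copy-count monitor: flag datasets that exceed a policy limit.
# The inventory dict is a stand-in for whatever export your copy data
# management or storage reporting tool provides.

MAX_COPIES = 5  # assumed policy threshold

inventory = {
    "finance_db": 12,
    "crm_app":    4,
    "web_logs":   18,
}

def find_violations(copies_by_dataset, limit):
    """Return datasets holding more copies than the policy allows."""
    return {name: count for name, count in copies_by_dataset.items()
            if count > limit}

for dataset, count in sorted(find_violations(inventory, MAX_COPIES).items()):
    print(f"{dataset}: {count} copies (policy allows {MAX_COPIES})")
```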

These five simple steps are critical to improving your company’s control of its copy data and saving money on hardware costs for storing the information you need on a day-to-day basis to meet your SLAs and keep operations running smoothly.

Check out the video of the webinar to see Laura’s whole presentation, as well as some points from Catalogic’s own CEO, Ed Walsh, and Steve Kenniston as they discuss how to gain the visibility you need to improve your copy data management efforts.

07/13/2015

Hey fellow NetApp Admin, see that? That’s the cDOT party!

Jump in! The water is just fine… or is it?

NetApp’s Clustered Data ONTAP (cDOT) is the place to be from a technology and performance point of view. As NetApp administrators quickly become aware, though, the transition away from 7-mode is not as simple as the flip of a switch. NetApp’s CEO Thomas Georgens understands this: “The complexity and duration of clustered ONTAP transitions have implications on several dimensions.”

The technical literature on this transition talks about three main stages: Identify, Design, and Implement. That first step, “Identify”, is by far the most important in determining your data migration plan and costs, and it is also the most time-consuming.

My fellow NetApp Admin, as the migration process begins to take form, you soon will be asking yourself:

  • How do I even begin to identify all the dependencies between NetApp nodes?
  • I have so many SnapMirror (SM) and SnapVault (SV) interdependencies. Is there “low-hanging fruit” that should be addressed first?
  • Which nodes are hidden hubs of SnapMirror and SnapVault relationships?
  • Which nodes are too critical and should be done last?
  • My data migration costs are too high! Where is my stale or unmodified data and how do I avoid moving it?

Catalogic ECX can help you!

As an integrated Copy Data Management platform, Catalogic ECX does what other solutions only dream about: ECX creates an actionable catalog of all the ONTAP objects and the relationships between them. See a demo of ECX’s management, orchestration, and analytics engine in action.

Here are 3 ways that ECX can help:

#1

We’ve just added a NetApp Transition Dependency report (see Figure 1) that will further empower you in your 7-mode to cDOT transition.

This report can help cut the time, cost, and staffing needed for your transition; understanding all your ONTAP relationships is the key.

It provides both a summary and detailed view of all the SnapMirror and SnapVault relationships for your whole environment.

Figure 1: NetApp Transition Dependency report

The Summary View (Figure 1) shows all the relationships, with counts, to clearly convey the weight of the dependencies between different nodes.

Figure 2: Relationship source and destination for SnapMirror and SnapVault

The Detailed View (Figure 2) further empowers you by delineating exactly the source and destination of each SnapMirror and SnapVault relationship.
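
If you export a relationship report like this to CSV, a short script can also surface the “hidden hub” nodes asked about earlier by counting how many SnapMirror/SnapVault relationships touch each node. This is a hedged sketch: the file name and column names are assumed, not ECX’s actual export schema.

```python
# Count SnapMirror/SnapVault relationships per node to find "hub" nodes.
# Assumes a CSV export with columns: source_node,destination_node,type
# (hypothetical column names; adjust to match your actual report export).

import csv
from collections import Counter

def rank_hubs(csv_path):
    """Rank nodes by how many relationships they participate in."""
    degree = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            degree[row["source_node"]] += 1
            degree[row["destination_node"]] += 1
    return degree.most_common()

if __name__ == "__main__":
    for node, count in rank_hubs("transition_dependency_report.csv"):
        print(f"{node}: {count} SnapMirror/SnapVault relationships")
```

Nodes at the top of that list are your hubs: likely candidates to migrate last, while nodes with only a handful of relationships are the low-hanging fruit.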

#2

NetApp Admins know that NetApp’s initial successes were built around its NAS functionality. Those who embraced NetApp early on are still heavy users of that functionality, so this becomes another area where migrating to cDOT is costly.

Think of it this way: if you were moving to a new house, you wouldn’t move 100% of what’s in your current house! Moving all the “extra” stuff costs time and money… You first throw away, you donate, you purge as much as possible.

Why aren’t you doing the same with your NetApp files?

With our catalog, we have a set of File Analytics capabilities; one example is our Files By Age report (Figure 3).

Figure 3: Files By Age report

You can find files based on created/accessed/modified date, get a summary view of the date distribution, and export a list of those files with their full paths.

ECX helps you decide which data should be moved or deleted.
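
For a rough sense of what a files-by-age analysis involves, here is a minimal, filesystem-level sketch. It walks a mounted path with Python’s standard library rather than using ECX’s catalog (which reports from the NAS metadata directly), and the mount point is an assumed example.

```python
# Bucket files under a mount point by age of last modification and
# print the distribution, roughly mirroring a "Files By Age" report.

import os
import time
from collections import Counter

BUCKETS = [(30, "< 30 days"), (180, "30-180 days"),
           (365, "180-365 days"), (float("inf"), "> 1 year")]

def files_by_age(root):
    now = time.time()
    counts = Counter()
    stale = []  # full paths of files untouched for over a year
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                age_days = (now - os.stat(path).st_mtime) / 86400
            except OSError:
                continue  # skip files that vanish or deny access
            for limit, label in BUCKETS:
                if age_days <= limit:
                    counts[label] += 1
                    break
            if age_days > 365:
                stale.append(path)
    return counts, stale

counts, stale = files_by_age("/mnt/netapp_share")  # assumed mount point
for label, count in counts.items():
    print(f"{label}: {count} files")
print(f"Candidates to archive or skip during migration: {len(stale)} files")
```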

#3

With our VM workflows deployed ahead of the migration (Figure 4), there is no need to rework any policies; they just keep working, automagically. Fewer procedures to change means less risk, which makes planning less painful.

Figure 4: ECX Snapshot Management automates the creation of copies

ECX snapshot management for your NetApp and VMware environments (whether on NetApp or any other vendor’s storage) automates the creation of copies. Because the NetApp workflows work for both 7-mode and cDOT, migrating your processes will be much easier.

In summary, when it’s time to migrate to cDOT, and before jumping into anything, the key is to know what you’re getting into.

Specifically for cDOT, the initial identification phase really sets the tone for the rest of the project. Catalogic ECX is the key to answering many of those vital questions.

Your next steps should be:

  • Click to see a demo of Catalogic ECX
  • Get a full-featured 30-day trial of ECX.

Go ahead, leverage Catalogic ECX, then be ready to jump in and join the cDOT party!

05/22/2015

Let’s say you’re driving down the street and you hear a noise coming from underneath the car that ‘sounds like’ you may have an issue with your muffler. As you’re driving, you see two muffler shops, one across the street from the other. One shop uses the term “radical” on its marquee and the other “analyze”. Which shop do you choose?

The ‘radical’ shop will have you drive in and in a matter of moments, you are convinced that you need a new car, and by doing so, you get a new muffler to boot.

The ‘analyze’ shop gets you a cup of coffee, takes your car in, puts it on a lift and runs some diagnostics.  In 15 minutes, when all is said and done, you find out all you really need is a new bracket.

Maybe you do need or want a new car, but it would be good to feel like you had the choice about which direction to take.

The reality is, the same goes for your data storage. Without the insight to make decisions about what they actually need, IT practitioners are too often pushed into a new technology that has them ripping out good technology that is already in place and that just needs, well, maybe just a new bracket. After all, you purchased the storage in your environment for some really good reasons. When you put out your RFP for storage, you had specific criteria in mind. The vendor you ended up choosing met all the criteria your environment needed: performance, scale, storage services (snapshots, thin provisioning, compression, deduplication, and so on). So why would you throw all that away (including the hard work that went into defining and selecting your storage platform) in order to gain a new feature? The urge to “rip and replace” seems particularly pronounced in the realm of copy data management, where traditional storage and backup vendors have not offered a comprehensive solution that allows the IT team to better manage the creation and use of copies of production data for various mission-critical use cases, including DR and Dev & Test. (For a good overview of copy data management, see George Crump’s excellent brief, ‘What is Copy Data’.)

But before you rip and replace, consider the analysis approach first. There is inherent value in having a deep understanding of the data in your environment, specifically how it is used and what the true business needs are. Making decisions without that insight can be costly and unnecessary. A big part of good decision-making is knowing what technology is available that can complement your storage rather than replace it. Putting in a ‘radical’ new solution that merely delivers storage services you already have will not help you be more successful with your data. It’s important to note that as data continues to grow, having technologies such as deduplication and/or compression that ‘speak the same language’ is critical to data management success. Adding another layer of complexity or another format of deduplication to your environment means consuming more processing power and time to ‘un-dedupe’ the data in order to read it from one platform to the next (as one example). It is clear that the radical approach brings the risk of creating a great deal of unnecessary work, cost and complexity.

Catalogic Software delivers revolutionary technology without the disruption, cost or complexity. Our copy data management platform, ECX, complements your existing storage stack rather than forcing a replacement (or the costly integration of multiple storage platforms). It allows you to take advantage of the storage you have and to gain better efficiency from your data. The new wave of data management is ‘Copy Data Management’. You don’t need to buy new hardware, new storage or new storage services in order to take advantage of a copy data solution. You only need a simple yet powerful copy data software platform to help you gain control over what you already have and begin leveraging it in ways that were previously not possible. Let us show you how you can better manage, orchestrate and analyze your copy data, all while helping to reduce storage costs. Learn more about ECX here, or download the new Lab Report from ESG Global.

02/25/2015

It was a big day at Catalogic Software as we just announced the worldwide general availability of ECX 2.0, our software-only intelligent copy data management platform. We publicly introduced ECX 2.0 during our beta program, which coincided with NetApp’s Insight conferences in both Las Vegas and Berlin late last year. This gave us the opportunity to demonstrate ECX to over 1,000 people between the two conferences.

Customer adoption of ECX since then has grown rapidly, and we already have dozens of deployments around the globe. As we’ve engaged directly with end users, we’ve been able to recognize the points that resonate most, and we’ve since modified the way we describe our value prop from Visibility, Insight and Control to Manage, Orchestrate and Analyze. The reality is, visibility and insight are important prerequisites, but the ability to manage and orchestrate is what IT needs to get ahead of data growth challenges and deliver superior services to their customers. Manageability and orchestration are at the heart of the ECX technology.

Manage & Orchestrate

In presenting the new message to prospects and partners, I am often asked, “What is the difference between ‘manage’ and ‘orchestrate’?” It’s a good question, and here is how Catalogic Software defines it. There are a number of tools that say they can help you ‘manage’ your data. These tools mostly provide the ability to control basic storage services that you want to apply to your data. They let you turn features such as storage efficiency options on and off. That is fine, but what storage admins need today is the ability to manage their data throughout its lifecycle. Sure, we have been talking about ‘data lifecycle management’ for a while now, but it is time to get control of it. When you don’t control it, you end up with many copies of your data, and that is the main culprit behind the geometric data growth we are experiencing.

This is where ECX really shines: the ability to truly manage your copy data throughout its lifecycle. Through the interface, ECX lets you manage the full lifecycle of all the snaps you create in your environment, including Snapshots, SnapMirrors and SnapVaults. ECX enables IT to set up both simple and complex Snapshot, SnapVault, and SnapMirror policies that can branch and cascade in any way you can imagine.

Typically, IT tries to automate these processes with scripts. However, scripts, by nature, are very ‘brittle’. The person who created them may know what is going on, but when they break, that person is never around and, in fact, may have left the company. Now these ‘helpful’ scripts have become a troubleshooting nightmare. Once the ECX snap policies are configured, they can be set to run on a schedule or ad hoc, and every evening a report is generated and emailed to let you know the success or failure of each snap policy.
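
To picture what “branch and cascade” means in practice, here is a hypothetical, vendor-neutral sketch of a cascading policy definition in Python. ECX configures such policies through its own interface; none of the field names below are ECX’s actual schema, and the volume names and email address are placeholders.

```python
# A hypothetical, declarative description of a cascading copy policy:
# a nightly Snapshot that is then vaulted for retention and mirrored
# for DR. Field names are illustrative, not ECX's actual schema.

nightly_policy = {
    "name": "prod-db-nightly",
    "schedule": "0 1 * * *",          # 1:00 AM daily (cron syntax)
    "steps": [
        {"action": "snapshot", "target": "vol_prod_db", "retain": 7},
        {"action": "snapvault", "source": "vol_prod_db",
         "destination": "vault_cluster:/vol_prod_db_sv", "retain": 90},
        {"action": "snapmirror", "source": "vol_prod_db",
         "destination": "dr_cluster:/vol_prod_db_dr"},
    ],
    "report": {"email": "storage-team@example.com", "on": "always"},
}

def summarize(policy):
    """Print a readable summary of what the policy will do each run."""
    print(f"Policy {policy['name']} (schedule {policy['schedule']}):")
    for step in policy["steps"]:
        dest = step.get("destination", step.get("target"))
        print(f"  - {step['action']} -> {dest}")

summarize(nightly_policy)
```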

Why is this important?

ECX exception reporting allows IT to ensure the snap policies they have created are being properly executed and, more importantly, to see where any issue may exist. These reports help IT diagnose issues with snaps quickly and easily. Additionally, these snaps are used for a myriad of business operations such as data protection, DR, test/dev, and analytics. If the consumer of this data cannot access it when it is needed, the consequences can be devastating. ECX gives IT the peace of mind that their environment is running properly. You cannot get this level of insight into your copy data lifecycle with any other tool or set of tools.

Orchestration is all about leveraging the snaps you have created, and it is at the heart of copy data management. This is very important. Many times, snaps are created with a good reason in mind, but without a catalog or a process to utilize them, what typically happens is the snap is created, quickly forgotten, and then just sits around taking up valuable storage space. Automation and orchestration let you put the snaps you just created to work. The bonus is that you can take advantage of these snaps for many use cases, meaning you don’t need multiple snaps for multiple purposes. If you want to use the last snap of the day for both Test/Dev and analytics environments, you can. This is true copy data management: getting the most out of fewer copies of data. At Catalogic, we believe you want or need a copy of your data for many business operations; you just don’t need a hundred copies.

To orchestrate your copy data, you simply select a copy you want to use and then select the location where you want to use it. It is that simple. This workflow can be set to run automatically, every day. A good example is Test/Dev. If you want your development organization working with the most recent version of production data, ECX lets you orchestrate a policy that takes the latest nightly snap and mounts it for development so they are ready to use it first thing every morning. This helps a Test/Dev environment evolve into more of a DevOps environment, the direction development organizations want to go. It also makes the code they are building much more accurate, because they are building and testing against the latest and greatest data set.
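
As a rough sketch of that Test/Dev pattern, the logic is simply “pick the newest copy and present it to the dev environment.” All names below are hypothetical, and the mount_for_dev() helper is a stand-in for the clone/export/mount step that ECX performs through its workflow engine.

```python
# Pick the most recent nightly snapshot and hand it to a (hypothetical)
# mount step for the dev environment.

from datetime import datetime

snapshots = [
    {"name": "prod_db.2015-02-23_0100", "created": datetime(2015, 2, 23, 1, 0)},
    {"name": "prod_db.2015-02-24_0100", "created": datetime(2015, 2, 24, 1, 0)},
    {"name": "prod_db.2015-02-25_0100", "created": datetime(2015, 2, 25, 1, 0)},
]

def latest(snaps):
    """Return the newest snapshot by creation time."""
    return max(snaps, key=lambda s: s["created"])

def mount_for_dev(snapshot, mount_point):
    # Placeholder for the actual clone/export/mount operation.
    print(f"Presenting {snapshot['name']} at {mount_point} for Test/Dev")

mount_for_dev(latest(snapshots), "/dev/refresh/prod_db")
```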

The features introduced in previous versions of Catalogic ECX are still available. ECX provides the most robust catalog of all your NetApp and VMware data on the market. The insight it provides is second to none and helps IT better understand their environment. That said, if IT really wants to get control, they should take advantage of Catalogic ECX 2.0 to drive more thoughtful data copies and snap policies, and then utilize these copies for many business operations. In turn, this:

  • Reduces storage management costs – one interface for all snap policy creation
  • Reduces storage overhead – fewer wasted copies of data ‘lying around’
  • Reduces troubleshooting costs through exception-based reporting
  • Drives higher storage utilization
  • Ensures higher accuracy in test/dev and analytic environments

Click here for demo videos to learn more about the power of copy data management.
