
Why store your data in the Cloud?

21 April 2015

Many manufacturing companies now have terabytes (TB) of production information to store. Johannes Petrowisch, partner account manager at COPA-DATA, looks at the issues surrounding big data archiving.

Compliance is one of the main reasons for archiving data on a long-term basis. Traceability and documentation of product history, including information on creation, quality and quantity, must be stored for accountability purposes. The most important ramification of this is that the firm has documented proof of its adherence to legal requirements.

Another reason for archiving data is that knowledge is power! The more data you collate and analyse, the stronger and more accurate future predictions will be. Whether for quality management, predictive maintenance or simply to stay ahead of the innovation curve, there is a strong belief that the more data points archived, the better.

The challenge
Data, big or small, needs to be kept safe and readily accessible for analysis.

COPA-DATA has identified a recurring trend: the cheaper the storage medium, the slower and less reliable it is to read the data back.

One less costly, if somewhat outdated, method is to move data to external media such as magnetic tape. This makes searching and extracting information cumbersome and time consuming, and it increases the risk of data loss or theft. The storage space is also often insufficient for the huge amounts of data recorded in industrial processes today.

Alternatively, data can be saved in a database. This keeps the data easily accessible, but it can be costly, as large quantities of information need to be split into separate database shards. Sharding increases running costs and the complexity of operations and maintenance, and it still carries the risks associated with a single point of failure.
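To make that sharding overhead concrete, here is a minimal Python sketch of hash-based sharding of archive records across several databases. The shard count, file names and table layout are assumptions for illustration only, not a description of any particular product.

```python
# Illustrative sketch only: a simple hash-based sharding scheme for archive
# records. Shard count, database files and table layout are assumptions.
import hashlib
import sqlite3

SHARD_COUNT = 4
shards = [sqlite3.connect(f"archive_shard_{i}.db") for i in range(SHARD_COUNT)]

for db in shards:
    db.execute(
        "CREATE TABLE IF NOT EXISTS measurements (tag TEXT, ts TEXT, value REAL)"
    )

def shard_for(tag: str) -> sqlite3.Connection:
    """Route a data point to a shard by hashing its tag name."""
    digest = hashlib.sha1(tag.encode()).hexdigest()
    return shards[int(digest, 16) % SHARD_COUNT]

def archive(tag: str, ts: str, value: float) -> None:
    """Insert one measurement into whichever shard owns its tag."""
    db = shard_for(tag)
    db.execute("INSERT INTO measurements VALUES (?, ?, ?)", (tag, ts, value))
    db.commit()

archive("Line1.Temperature", "2015-04-21T10:00:00Z", 72.4)
```

Any query that spans tags then has to fan out across every shard, which is where much of the extra operational and maintenance complexity comes from.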

The cloud
Many companies are now looking towards the cloud for their archiving needs. COPA-DATA estimates that updating to a big data cloud solution could reduce a company's TCO (Total Cost of Ownership) for storage by up to 60%. 

When collecting plant data, large quantities need to be archived on a daily basis. The elasticity of the cloud makes it ideal for such scenarios. Cloud-based storage can rapidly process large volumes of unstructured and often heterogeneous data to identify patterns that, in turn, can be used to improve business strategies. This can even be done in real time.

Unsurprisingly, big data environments require big supporting structures. Clusters of servers are used to support the tools needed to process large volumes of information. The added benefit of cloud storage is that it already runs on pools of server, storage and network resources, so it can be scaled up and down as needed.
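As a concrete illustration of pushing a day's archive into cloud storage, the following minimal Python sketch uploads one archive file to Azure Blob Storage using the azure-storage-blob package. The container name, blob path, file name and environment variable are assumptions for illustration, and this is not the zenon/StorSimple integration described later in the article.

```python
# Minimal sketch, assuming the azure-storage-blob package and a storage account
# connection string in the AZURE_STORAGE_CONNECTION_STRING environment variable.
# Container and file names are illustrative placeholders.
import os
from datetime import date
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
container = service.get_container_client("plant-archive")

# Upload today's archive file under a date-based blob name so daily batches
# stay easy to locate and to age out later.
archive_file = f"daily_archive_{date.today():%Y%m%d}.csv"
with open(archive_file, "rb") as data:
    container.upload_blob(name=f"line1/{archive_file}", data=data, overwrite=True)
```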

Security in industry is now of paramount importance as companies compile and archive larger amounts of historical data.

Cloud archiving is one of the safest methods of big data storage, guarding against both theft and loss. Microsoft's cloud-integrated StorSimple storage (CiS) and Microsoft Azure, for example, combined with industrial automation software such as COPA-DATA's zenon, form a secure, ergonomic and dynamic archiving solution.

Data is protected against loss and unauthorised access by automatic backup, redundancy, disaster recovery and hardware encryption. A further security advantage is that metadata is saved only in the local runtime application.

The native archiving technology, consisting of aggregated archives, dynamic re-readability and trend evaluations, ensures that data points are not saved locally on the panel or PC, but on a hardware appliance in the internal network, the CiS. This dynamic storage gateway, with a current capacity of 120 TB per device, ensures that data is moved to Azure cloud storage and safely archived there.
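The tiering performed by such a gateway is transparent to the application, but the idea can be sketched as a simple policy: keep recent archives on the local appliance and move the oldest ones to cloud storage once a capacity threshold is crossed. The Python sketch below is purely schematic; the directory, watermark and upload_to_cloud() helper are hypothetical and do not describe how CiS works internally.

```python
# Schematic illustration of the tiering idea behind a cloud-integrated storage
# gateway: recent archives stay on the local appliance; once local capacity
# crosses a threshold, the oldest archives are moved out to cloud storage.
# All names, thresholds and the upload_to_cloud() helper are hypothetical.
import os

LOCAL_DIR = "/mnt/cis_appliance/archives"
LOCAL_CAPACITY_BYTES = 120 * 10**12   # 120 TB per device, per the article
HIGH_WATERMARK = 0.9                  # assumed tiering threshold

def local_usage() -> int:
    """Total size of all archive files currently held on the appliance."""
    return sum(
        os.path.getsize(os.path.join(LOCAL_DIR, f)) for f in os.listdir(LOCAL_DIR)
    )

def tier_out(upload_to_cloud) -> None:
    """Move the oldest archive files to cloud storage until usage drops."""
    files = sorted(
        os.listdir(LOCAL_DIR),
        key=lambda f: os.path.getmtime(os.path.join(LOCAL_DIR, f)),
    )
    for name in files:
        if local_usage() < HIGH_WATERMARK * LOCAL_CAPACITY_BYTES:
            break
        path = os.path.join(LOCAL_DIR, name)
        upload_to_cloud(path)          # e.g. an Azure Blob upload as sketched above
        os.remove(path)                # free local space once the file is tiered out
```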

It makes sense for companies to look to cloud computing for their analytical and archiving needs. For today's businesses, properly mining and refining data, turning it from an artefact into an asset, drives innovation and delivers competitive advantage. A cloud-based big data archiving system gives a business the ability to analyse data at scale without worrying about security or cost.

