Learning the lessons of HPC

Arif Mohamed | Computerworld UK | 29th May 2014

Big Data is a huge growth area in computing today, as businesses grapple with increasing volumes of structured and unstructured data.

But Big Data isn’t really a new phenomenon. Scientific and engineering industries have been recording and digesting large, fast-flowing and diverse datasets for decades, using high-end processing platforms and tools.

Consequently, enterprises have a lot to learn from both High Performance Computing (HPC) and High Performance Technical Computing (HPTC): the forerunners of today’s Big Data technologies.

HPC and HPTC bring high-end computing power to bear on scientific and technical problems, as opposed to business ones. Applications include computational fluid dynamics and seismic tomography, which is used to image the Earth's sub-surface characteristics.

So, what can enterprises learn from HPC users?

The first lesson is to introduce higher performance into mainstream computing. This can be done by optimising your software on the one hand and upgrading your data centre infrastructure on the other.

You can increase your processing power, your multi-threading capabilities and floating point performance. You can also reduce network complexity and boost storage flexibility by unifying your network and storage fabrics.
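As a minimal sketch of the parallelism point, the Python snippet below splits a floating-point-heavy job across CPU cores and times it against the serial run. The workload and chunk sizes are illustrative, not from the article, and it uses processes rather than threads, since that is the idiomatic way to parallelise CPU-bound numeric work in Python:

    import math
    import time
    from multiprocessing import Pool

    def heavy_float_work(n):
        # A float-heavy loop standing in for a real numeric kernel.
        total = 0.0
        for i in range(1, n):
            total += math.sqrt(i) * math.sin(i)
        return total

    if __name__ == "__main__":
        chunks = [2_000_000] * 8   # eight independent work items

        start = time.perf_counter()
        serial = [heavy_float_work(n) for n in chunks]
        print(f"serial:   {time.perf_counter() - start:.2f}s")

        start = time.perf_counter()
        with Pool() as pool:       # one worker process per core by default
            parallel = pool.map(heavy_float_work, chunks)
        print(f"parallel: {time.perf_counter() - start:.2f}s")

On a multi-core machine the parallel run should finish in roughly the serial time divided by the number of cores, which is the kind of headroom HPC workloads routinely exploit.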

The second lesson is to incorporate better mathematical data modelling, which is one of the strengths of HPC/HPTC.

Modelling goes hand-in-hand with simulation, which is the third lesson. Simulation enables you to explore numerous scenarios involving complex data, quickly and cost-effectively.
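To make the modelling-plus-simulation idea concrete, here is a small Monte Carlo sketch in Python. The demand model and every parameter are hypothetical, chosen only to show how thousands of scenarios can be explored in seconds:

    import random

    def simulate_quarter(demand_mean=1000, demand_sd=150,
                         unit_margin=4.0, fixed_cost=2500.0):
        # One simulated quarter: random demand drives profit.
        demand = random.gauss(demand_mean, demand_sd)
        return demand * unit_margin - fixed_cost

    trials = [simulate_quarter() for _ in range(100_000)]
    loss_probability = sum(1 for p in trials if p < 0) / len(trials)
    print(f"expected profit: {sum(trials) / len(trials):,.0f}")
    print(f"probability of a loss: {loss_probability:.1%}")

The same pattern scales from a laptop to an HPC cluster: refine the model, rerun the scenarios, and measure the spread of outcomes instead of betting on a single forecast.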

Fourth is powerful analytics. Apache Hadoop is proving to be a great platform for handling Big Data, particularly when delivered through an enterprise distribution such as Cloudera's Distribution Including Apache Hadoop (CDH).
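As one illustration of the Hadoop model, the classic word count can be written as a pair of Hadoop Streaming scripts in Python; the file names and HDFS paths here are assumptions for the example:

    #!/usr/bin/env python
    # mapper.py -- emit one (word, 1) pair per token on stdin
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    #!/usr/bin/env python
    # reducer.py -- Hadoop Streaming delivers mapper output sorted by key,
    # so equal words arrive consecutively and can be summed in one pass
    import sys

    current_word, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").rsplit("\t", 1)
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(n)
    if current_word is not None:
        print(f"{current_word}\t{count}")

    # Submitted with the streaming jar (path varies by distribution), e.g.:
    # hadoop jar hadoop-streaming-*.jar \
    #     -input /data/text -output /data/counts \
    #     -mapper mapper.py -reducer reducer.py \
    #     -file mapper.py -file reducer.py

Hadoop handles the splitting, shuffling and sorting across the cluster; the analyst supplies only these two small, independent functions.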

Finally, the HPC world has made advances in machine learning, using algorithms that have linear algebra at their core. This is something for the future but worth noting now.

When applied to enterprise Big Data scenarios, learning networks that can process large, parallel jobs and optimise their own performance as they go will be highly useful to businesses.
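A small example of the linear-algebra core mentioned above: fitting a line by solving the normal equations with NumPy. The synthetic data is illustrative only, but the technique is the building block of many larger learning methods:

    import numpy as np

    # Synthetic data (illustrative only): y = 3x + 2 plus noise
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=200)
    y = 3 * x + 2 + rng.normal(0, 1, size=200)

    # Design matrix with a bias column, then the normal equations:
    # solve (A^T A) w = A^T y  -- pure linear algebra
    A = np.column_stack([x, np.ones_like(x)])
    w = np.linalg.solve(A.T @ A, A.T @ y)
    print(f"slope ~ {w[0]:.2f}, intercept ~ {w[1]:.2f}")

On HPC systems, exactly these matrix operations are what highly tuned libraries and parallel hardware accelerate.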

It’s time to learn the lessons of the elders and get the best from our Big Data.

