Over the last 20 years, the computer, engineering, and design industries have been evolving rapidly, especially while chip manufacturers concentrated on higher clock speeds and more transistors in the processor to finish the job quicker. The magnitude of computing rose drastically, to the point where the industry needed to change direction at the beginning of the 21st century, and so we got the introduction of multi-core and multi-threading support. To keep up with the changes, software architecture needed to change as well. At that time, I was still concentrating on speeding up rendering times, utilizing any available resources on the network through parallel distributed rendering. But now everything has changed: multi-core, multi-threading, parallel computing, and even distributed computing have become common. The birth of the GPU completely changed the game in how we process data and get results almost instantly. Today, this is well known as accelerated computing.

THE HISTORY

I won't explain everything about the historical development of the computer, merely the moments where things were changing and providing the force that drove the next historical moments ahead. When Nvidia released its RIVA (Real-time Interactive Video and Animation) accelerator in 1997, it drew a lawsuit from SGI in 1998. However, the settlement reached between them gave Nvidia access to technology that SGI had developed. When ATI rushed into the graphics market in mid-2000, the graphics battle between ATI and Nvidia emerged around their products: Radeon vs. GeForce. Although ATI was later acquired by AMD, Nvidia and AMD have been in a head-to-head battle since the late 2000s. Beyond the GPU (Graphics Processing Unit) battle, they are now battling for the GP-GPU (General-Purpose GPU) market with ever more advanced GPU processors. The year 2007 was the dawn of GP-GPU utilization, when both companies delivered hundreds of millions of transistors inside each GPU to extend its capabilities into High Performance Computing workloads. However, each company chose a different path in providing accelerated HPC through the GP-GPU: AMD relies on OpenCL (Open Computing Language) development, while Nvidia focuses on its own proprietary CUDA (Compute Unified Device Architecture). Both deliver the performance they have been promising.

HIGH PERFORMANCE COMPUTING

The term High Performance Computing mostly relates to high-throughput, high-bandwidth, multi-task computing. In some sense, it is like imagining a supercomputer that calculates millions upon millions of instructions and gets even faster as the technology develops. I personally didn't expect that I would live long enough to experience extraordinary computing performance even with the mobile workstation I carry along with me anywhere. The HP ZBook Studio with a Quadro M1000M that has been my main gear for many months can deliver more than 1 TFLOPS of floating-point performance. That is more than double the performance of my old, bulky professional graphics card, the Quadro FX 4800, back in 2009, and the die size has shrunk to just a quarter of it. Operating systems such as Windows and Linux have long supported multi-core and multi-processor systems, making them good platforms for High Performance Computing, even in a desktop form factor. Both operating systems support 64-bit as well, allowing much larger datasets to be processed.
For example, a rendering task that used to take days can now be done in hours, minutes, or even seconds with the proper configuration. On a larger scale, we are familiar with racks populated with many servers, each carrying multiple processors, providing computing that scales to much larger needs. Tasks completed on High Performance Computing platforms range from personal video editing up to tasks in the following industries:

CLUSTER AND GRID COMPUTING

Early on, I personally had the terms cluster computing and grid computing mixed up. It turns out they have different meanings and practices. In Cluster Computing, connected computers work together to perform the same task and are viewed as a single system. Grid Computing, by contrast, is formed from a collection of computing resources connected to perform non-interactive workloads drawn from a large variety of tasks, although a grid can also be dedicated to a single task. Both Cluster and Grid Computing can deliver high performance computing similar to a "supercomputer" when scaled up. Even a single workstation with a streaming-processor GPU can deliver TFLOPS of performance.

ACCELERATED COMPUTING

An experience of my own from many years ago just came across my mind, from when I was using a pen plotter to generate production drawings. It took us hours to plot a production drawing at A1 or even A0 paper size with a pen plotter. It required us to use multiple pens for multiple lineweights, and even different pens for different colors! Things have changed since then. The device is no longer called a plotter; it is a large format printer that can easily do plotting. With multiple inkjet heads, it completes an A0 printing task in minutes, in full color as well! The manufacturers did an excellent job of simplifying the task for us, hiding the many complexities underneath it all, so we can use the printer as simply as possible.

However, things are a bit different when migrating from serial computing, which is based on CPUs, to parallel computing using GPUs. Some code requires the application to drive the process, with distinct instructions that can be assigned to the GPUs to complete the task, as the sketch below illustrates. When this is done properly, we see significant improvements in performance.
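To give a feel for what "the application drives the process" means, here is a minimal CUDA sketch of the classic SAXPY operation, y = a*x + y. On a CPU this would be a simple serial loop; on a GPU, the application must allocate device memory, copy the data across, launch a kernel in which each thread handles one element in parallel, and copy the results back.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread computes one element of y = a*x + y in parallel,
// replacing what would be a serial for-loop on the CPU.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;               // one million elements
    const size_t bytes = n * sizeof(float);

    // Host-side data, prepared by the application.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // The application drives the process: allocate GPU memory,
    // copy the data over, launch the kernel, copy the results back.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    saxpy<<<blocks, threadsPerBlock>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f (expected 4.0)\n", hy[0]);

    cudaFree(dx);
    cudaFree(dy);
    free(hx);
    free(hy);
    return 0;
}

Compiled with nvcc, this same pattern scales from a single workstation GPU up to multi-GPU servers; the hard part of the migration is restructuring the work into data-parallel kernels like this one.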
A quick video from a YouTuber called 8088NET shows the performance gain between the Nvidia Quadro M4000 (Maxwell architecture) and the newer P4000 (Pascal architecture) when rendering a GPU-supported task in Blender 2.78. The GPU significantly accelerates the rendering process. For bigger needs and requirements, accelerated computing using GPUs can also be applied to much more complex and larger tasks, lowering the TCO (Total Cost of Ownership) as well as operating expenses. Here is another video from Nvidia that explains the benefits when GPUs are deployed in datacenters.

CONCLUSION

The technology for accelerated computing is already here. It brings High Performance Computing to the next level and changes how we work: away from mostly serial computing and toward parallel computing, in order to get results as fast as possible. Accelerated computing is widely available, from a single desktop deployment up to a large, scalable datacenter, and every scale stands to benefit from the change to parallel computing offered by GPUs. More advanced things enabled by this technology are already on the horizon: Deep Learning and Artificial Intelligence. Many kinds of research, simulations, and predictive calculations can be performed with accelerated computing, making them more precise, allowing deeper analysis, and producing results much more quickly than just a few years back. Another leap for mankind, preparing a better world as a place for our children and their future.
To learn more about how to accelerate your computing tasks and solutions, please don't hesitate to contact us using this form.

Source: Personal Experience and The Web.
Author
Bimo Adi Prakoso, founder of Sentra Grafika Kompumedia, is an engineering, animation, and broadcast industry professional and workstation evangelist. He has been in the workstation industry since 1996, the era of SGI.