2018 iPad Pro to rock an improved A12X chip with a faster GPU for more powerful graphics

Apple’s upcoming iPad Pro tablets for 2018 will use an Apple-designed A12X Bionic chip, an enhanced version of the regular A12 Bionic chip found in the iPhone XS, XS Max and XR smartphones, featuring a more powerful GPU for faster graphics.

Now that a Chinese regulatory filing has all but confirmed a new iPad-focused Apple media event this month, fresh details about the upcoming refresh continue to trickle in on an almost daily basis. According to Brazilian iOS developer Guilherme Rambo, the 2018 iPad Pro refresh will include a new Apple-designed A12X Bionic system-on-a-chip.


"2018 iPad Pro to rock an improved A12X chip with a faster GPU for more powerful graphics" is an article by iDownloadBlog.com.
Make sure to follow us on Twitter, Facebook, and Google+.

Nvidia launches Rapids to help bring GPU acceleration to data analytics


Nvidia, together with partners like IBM, HPE, Oracle, Databricks and others, is launching a new open-source platform for data science and machine learning today. Rapids, as the company is calling it, is all about making it easier for large businesses to use the power of GPUs to quickly analyze massive amounts of data and then use that to build machine learning models.

“Businesses are increasingly data-driven,” Nvidia’s VP of Accelerated Computing Ian Buck told me. “They sense the market and the environment and the behavior and operations of their business through the data they’ve collected. We’ve just come through a decade of big data and the output of that data is using analytics and AI. But most of it is still using traditional machine learning to recognize complex patterns, detect changes and make predictions that directly impact their bottom line.”

The idea behind Rapids then is to work with the existing popular open-source libraries and platforms that data scientists use today and accelerate them using GPUs. Rapids integrates with these libraries to provide accelerated analytics, machine learning and — in the future — visualization.

Rapids is based on Python, Buck noted; it has interfaces that are similar to Pandas and scikit-learn, two very popular data analysis and machine learning libraries, and it’s based on Apache Arrow for in-memory database processing. It can scale from a single GPU to multiple nodes, and IBM notes that the platform can achieve improvements of up to 50x for some specific use cases when compared to running the same algorithms on CPUs (though that’s not all that surprising, given what we’ve seen from other GPU-accelerated workloads in the past).
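To give a sense of what that Pandas/scikit-learn-style workflow looks like, here is a minimal sketch using cuDF and cuML, the dataframe and machine learning libraries that ship as part of Rapids. The file name and column names are placeholders, and exact module paths can vary between Rapids releases.

```python
# Minimal Rapids sketch: a Pandas-like dataframe and a scikit-learn-like model,
# both running on the GPU. File and column names are placeholders.
import cudf                      # GPU dataframe library with a Pandas-like API
from cuml.cluster import KMeans  # GPU ML library with a scikit-learn-like API

# Load a CSV directly into GPU memory, much like pandas.read_csv()
df = cudf.read_csv("transactions.csv")

# Familiar dataframe operations, executed on the GPU
daily = df.groupby("day")["amount"].sum()

# Fit a clustering model without copying the data back to the CPU
model = KMeans(n_clusters=8)
model.fit(df[["amount", "latency"]])
print(model.cluster_centers_)
```

The point of the sketch is that the call patterns mirror the CPU libraries data scientists already use; only the import names change.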

Buck noted that Rapids is the result of a multi-year effort to develop a rich enough set of libraries and algorithms, get them running well on GPUs and build the relationships with the open-source projects involved.

“It’s designed to accelerate data science end-to-end,” Buck explained. “From the data prep to machine learning and for those who want to take the next step, deep learning. Through Arrow, Spark users can easily move data into the Rapids platform for acceleration.”
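Because both sides speak Arrow, handing a table to Rapids is a format conversion rather than a full serialization step. A rough sketch of that interchange, assuming the pyarrow and cuDF packages are installed (the table contents below are made up):

```python
# Sketch of Arrow-based interchange into Rapids: build an Arrow table on the host
# (e.g. collected from a Spark job), then hand it to cuDF for GPU processing.
import pyarrow as pa
import cudf

# An Arrow table as it might arrive from an upstream system; values are made up.
table = pa.Table.from_pydict({
    "user_id": [1, 2, 3, 4],
    "spend":   [12.5, 3.0, 99.9, 7.25],
})

# Construct a GPU dataframe from the Arrow table...
gdf = cudf.DataFrame.from_arrow(table)

# ...run GPU-side analytics...
print(gdf["spend"].mean())

# ...and convert back to Arrow when results need to leave the GPU.
result = gdf.to_arrow()
```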

Indeed, Spark is surely going to be one of the major use cases here, so it’s no wonder that Databricks, the company founded by the team behind Spark, is one of the early partners.

“We have multiple ongoing projects to integrate Spark better with native accelerators, including Apache Arrow support and GPU scheduling with Project Hydrogen,” said Spark founder Matei Zaharia in today’s announcement. “We believe that RAPIDS is an exciting new opportunity to scale our customers’ data science and AI workloads.”

Nvidia is also working with Anaconda, BlazingDB, PyData, Quansight and scikit-learn, as well as Wes McKinney, the head of Ursa Labs and the creator of Apache Arrow and Pandas.

Another partner is IBM, which plans to bring Rapids support to many of its services and platforms, including its PowerAI tools for running data science and AI workloads on GPU-accelerated Power9 servers, IBM Watson Studio and Watson Machine Learning and the IBM Cloud with its GPU-enabled machines. “At IBM, we’re very interested in anything that enables higher performance, better business outcomes for data science and machine learning — and we think Nvidia has something very unique here,” Rob Thomas, the GM of IBM Analytics told me.

“The main benefit to the community is that through an entirely free and open-source set of libraries that are directly compatible with the existing algorithms and subroutines that they’re used to — they now get access to GPU-accelerated versions of them,” Buck said. He also stressed that Rapids isn’t trying to compete with existing machine learning solutions. “Part of the reason why Rapids is open source is so that they can easily incorporate those machine learning subroutines into their software and get the benefits of it.”

Manually set your Mac’s cooling fan speeds with Macs Fan Control

You can manually configure your Mac’s fan speeds with a useful free utility called Macs Fan Control. We show you how to use it.

If you own an Apple computer, especially a modern one, then you’ve probably noticed how thin these machines have become. Despite that, most Macs still sport internal cooling fans to keep the CPU and GPU temperatures in check.

By default, Apple’s internal cooling fans run as silently as possible for a quiet user experience, but this isn’t without its caveats. Thinner machines like the MacBook Pro are more susceptible to heat soak because the cooling capabilities of such a compact chassis are limited; this is something you’ve undoubtedly felt while the machine sits on your lap during intensive tasks.


"Manually set your Mac’s cooling fan speeds with Macs Fan Control" is an article by iDownloadBlog.com.
Make sure to follow us on Twitter, Facebook, and Google+.

Nvidia launches the Tesla T4, its fastest data center inferencing platform yet


Nvidia today announced its new GPU for machine learning and inferencing in the data center. The new Tesla T4 GPUs (where the ‘T’ stands for Nvidia’s new Turing architecture) are the successors to the current batch of P4 GPUs that virtually every major cloud computing provider now offers. Google, Nvidia said, will be among the first to bring the new T4 GPUs to its Cloud Platform.

Nvidia argues that the T4s are significantly faster than the P4s. For language inferencing, for example, the T4 is 34 times faster than using a CPU and more than 3.5 times faster than the P4. Peak performance for the T4 is 260 TOPS for 4-bit integer operations and 65 TFLOPS for floating point operations. The T4 sits on a standard low-profile 75 watt PCI-e card.

What’s most important, though, is that Nvidia designed these chips specifically for AI inferencing. “What makes Tesla T4 such an efficient GPU for inferencing is the new Turing tensor core,” said Ian Buck, Nvidia’s VP and GM of its Tesla data center business. “[Nvidia CEO] Jensen [Huang] already talked about the Tensor core and what it can do for gaming and rendering and for AI, but for inferencing — that’s what it’s designed for.” In total, the chip features 320 Turing Tensor cores and 2,560 CUDA cores.

In addition to the new chip, Nvidia is also launching a refresh of its TensorRT software for optimizing deep learning models. This new version also includes the TensorRT inference server, a fully containerized microservice for data center inferencing that plugs seamlessly into an existing Kubernetes infrastructure.


Google Cloud gets support for Nvidia’s Tesla P4 inferencing accelerators


These days, no cloud platform is complete without support for GPUs. There’s no other way to support modern high-performance and machine learning workloads without them, after all. Often, the focus of these offerings is on building machine learning models, but today, Google is launching support for the Nvidia P4 accelerator, which focuses specifically on inferencing to help developers run their existing models faster.

In addition to these machine learning workloads, Google Cloud users can also use the GPUs for running remote display applications that need a fast graphics card. To do this, the GPUs support Nvidia Grid, the company’s system for making server-side graphics more responsive for users who log in to remote desktops.

Since the P4s come with 8GB of GDDR5 memory and can handle up to 22 tera-operations per second for integer operations, these cards can handle pretty much anything you throw at them. And since buying one will set you back at least $2,200, if not more, renting them by the hour may not be the worst idea.

On the Google Cloud, the P4 will cost $0.60 per hour with standard pricing and $0.21 per hour if you’re comfortable with running a preemptible GPU. That’s significantly lower than Google’s prices for the P100 and V100 GPUs, though we’re talking about different use cases here, too.
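As a rough back-of-the-envelope comparison (ignoring power, hosting and operations costs, which would only tilt things further toward renting), the hourly rates don’t add up to the roughly $2,200 purchase price until a few thousand hours of continuous use:

```python
# Back-of-the-envelope break-even, using the prices quoted above.
# Ignores power, hosting and ops costs, so it understates the cost of owning a card.
CARD_PRICE = 2200.00       # approximate purchase price of a Tesla P4
STANDARD_RATE = 0.60       # $/hour on Google Cloud, standard
PREEMPTIBLE_RATE = 0.21    # $/hour on Google Cloud, preemptible

for label, rate in [("standard", STANDARD_RATE), ("preemptible", PREEMPTIBLE_RATE)]:
    hours = CARD_PRICE / rate
    print(f"{label}: ~{hours:,.0f} hours (~{hours / 24 / 30:.1f} months of 24/7 use)")

# standard:    ~3,667 hours  (~5.1 months of 24/7 use)
# preemptible: ~10,476 hours (~14.6 months of 24/7 use)
```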

The new GPUs are now available in us-central1 (Iowa), us-east4 (N. Virginia), northamerica-northeast1 (Montreal) and europe-west4 (Netherlands), with more regions coming soon.