The latest version of the software brings better APIs and more intuitive Python programming.
TensorFlow, the popular open source machine learning library originally developed at Google, has reached its second major release.
Version 2.0 adds better multi-GPU support, improved integration with the Keras neural network library, a wider variety of APIs, and a standardized SavedModel file format for easy portability.
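The SavedModel format mentioned above is a language-neutral export of a model's computation. A minimal sketch of saving and reloading a model this way, assuming a TensorFlow 2.x installation (the `Adder` module and the `/tmp/adder` path are illustrative):

```python
import tensorflow as tf

# A trivial module with one exported function; any tf.Module
# (including trained models) can be serialized the same way.
class Adder(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def add_one(self, x):
        return x + 1.0

tf.saved_model.save(Adder(), "/tmp/adder")     # write the SavedModel directory
restored = tf.saved_model.load("/tmp/adder")   # reload without the Python class
print(restored.add_one(tf.constant([1.0, 2.0])))
```

Because the directory contains the serialized graph and weights rather than Python source, the same artifact can be served from TensorFlow Serving, TensorFlow Lite, or TensorFlow.js.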
The software also introduces eager execution by default, meaning operations are executed immediately as they are called from Python, without building graphs first. A similar approach is used by Facebook's competing PyTorch ML library.
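Eager execution can be seen in a few lines, assuming a TensorFlow 2.x installation: an op returns a concrete tensor immediately, with no session or graph-building step.

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)   # executed immediately, not added to a graph
print(y.numpy())      # a concrete NumPy array is available right away
```

Under TensorFlow 1.x the same `matmul` call would only have returned a symbolic graph node, with the actual values computed later inside a `tf.Session`.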
“TensorFlow 2.0 is packed with many great GPU acceleration features, and we can’t wait to see the amazing AI applications the community will create with these updated tools,” said Kari Briski, senior director of accelerated computing software product management at Nvidia.
Free for all
TensorFlow is among the software tools credited with enabling the machine learning revolution. Developed by the Google Brain research team for internal use, it was released under the open source Apache License in November 2015.
TensorFlow simplifies the development of machine learning applications and supports deployment to any platform, from supercomputers to smartphones. It is used in a wide variety of scenarios, from university labs to large enterprise data centers.
With TensorFlow, training and inference can be programmed in Python, JavaScript, or Swift, the programming language created by Apple.
Development of the software is still led by Google, which uses it in both research and production.
The TensorFlow team said the updated code delivers up to 3x faster training performance using mixed precision on Nvidia’s Volta and Turing GPUs, and much faster inference using T4 GPUs – offered by Google Cloud since April and introduced by AWS last month.
The TensorFlow 2.0 release includes an automated script for converting existing TensorFlow 1.x code.
Google is already using the updated library for the language understanding model within Google News, which organizes news into storylines, and the company says it has "significantly improved" story coverage.
Source at AIBusiness