TensorFlow is an open-source Python library and framework for building machine learning applications. It is a symbolic math toolkit that uses dataflow and differentiable programming to carry out the operations needed for deep neural network training and inference. It lets programmers build machine learning applications using a range of tools, libraries, and community resources.
TensorFlow is currently the best-known deep learning library in the world. All of Google's products incorporate machine learning to enhance the search engine, translation, image captioning, or recommendations. For instance, users get a faster and more refined search experience with artificial intelligence: when a person types a keyword in the search bar, the search engine suggests what the next word could be.
How to use TensorFlow in Python [Complete Tutorial]
The leading open-source deep learning framework, created and maintained by Google, is TensorFlow. Using TensorFlow directly can be difficult, but the current tf.keras API brings Keras's simplicity and usability to the TensorFlow project.
TensorFlow Applications
As previously mentioned, TensorFlow is an excellent tool with countless advantages when used correctly. Classification, perception, understanding, discovery, prediction, and production are some of this library's main functions.
In production, Google uses TensorFlow to improve the services it provides, such as Gmail, the Google search engine, and image captioning, applying it across domains according to the requirements. Let's explore some notable applications of TensorFlow:
a) Image Recognition
Image recognition is the process of giving an image as input to a neural network and receiving that image back with some kind of label as output. The acquired label belongs to a pre-defined class; there can be just one class for labeling or several. If there is only one class, the task is known as recognition; if there are multiple classes, it is known as classification.
Object detection is a subdomain of image classification in which specific instances of objects are identified as belonging to a certain class, such as humans, cars, or phones.
b) Speech Recognition Systems
Speech recognition and voice recognition are two different domains and should not be confused:
- Speech recognition: used to identify words in spoken language.
- Voice recognition: biometric technology for identifying an individual's voice
Now let's have a look at speech recognition systems.
The ability of a machine or program to recognize words spoken aloud and translate them into legible text is known as speech recognition, often called speech-to-text. Basic speech recognition software can only pick out words and phrases that are uttered clearly and has a limited vocabulary. More advanced software can handle diverse languages, accents, and natural speech.
Speech recognition draws on research in computer science, linguistics, and computer engineering. Speech recognition features are built into many contemporary gadgets and text-focused software to facilitate easier or hands-free usage.
c) Voice recognition
TensorFlow is widely used in voice recognition systems for the telecom, mobile, security, and search industries, among others. Without requiring a keyboard or mouse, it employs voice recognition technology to issue commands, carry out activities, and accept input.
TensorFlow-trained automatic voice recognition is used for this. These technologies digitize the human voice and translate it into text or computer-comprehensible code.
Systems like Bluetooth, virtual assistants, and Google Voice are built on TensorFlow-trained models. TensorFlow's voice recognition method is also used to create customer relationship management (CRM) for client-based systems.
d) Text-based applications
Text messages, responses, comments, tweets, stock results, and other communication tools are data sources. TensorFlow is used to process this data, analyze it, and generate predictions such as anticipated sales.
We accomplish this using a variety of techniques, including sentiment analysis, bag of words, and others. By decoding the phrases used in texts, this can help determine the risk connected to any firm. Additionally, Google employs it for text translation between more than 100 languages.
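As a rough illustration of the bag-of-words idea mentioned above (plain Python, no TensorFlow required), a text is reduced to word counts that a model can consume; the sample sentence is made up:

```python
from collections import Counter

def bag_of_words(text):
    """Lowercase the text, split on whitespace, and count word occurrences."""
    return Counter(text.lower().split())

counts = bag_of_words("Good product good price")
print(counts)  # Counter({'good': 2, 'product': 1, 'price': 1})
```

A sentiment model would then learn from these counts rather than from raw characters.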
History of TensorFlow
A few years ago, deep learning began to outperform all other machine learning algorithms when given enormous amounts of data. Google realized it could use deep neural networks to improve its services:
- Gmail
- Photos
- Google search engine
To enable collaboration between researchers and developers when creating AI models, Google created a framework called TensorFlow. Once built, it allows many individuals to use it and scale it.
TensorFlow was first made public in late 2015, and the first stable version debuted in 2017. It is open source under the Apache License and is available for use, modification, and redistribution without any payment to Google.
Working of TensorFlow
TensorFlow accepts inputs as a multi-dimensional array called Tensor, allowing you to create dataflow graphs and structures to specify how data goes through a graph. It enables you to create a flowchart of the operations that can be carried out on these inputs, with the output appearing at the other end.
- TensorFlow's first layer consists of the device layer and the network layer. The device layer contains the implementations that communicate with the various devices (GPU, CPU, and TPU) of the operating system TensorFlow runs on, while the network layer contains implementations that connect to other machines over various networking protocols in a distributed training setup.
- The second layer consists of kernel implementations for the operations most used in machine learning.
- The third layer consists of the dataflow executor and the distributed master. The distributed master divides workloads among the devices in a system, while the dataflow executor executes the dataflow graph optimally.
- The next layer exposes all of the features through an API written in the C programming language, chosen because it is fast, dependable, and independent of the operating system.
- The fifth layer supports Python and C++ clients.
- The final layer contains the training and inference libraries for TensorFlow, implemented in Python and C++.
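The tensor-in, tensor-out flow described above can be sketched in a few lines; the values here are arbitrary and purely illustrative:

```python
import tensorflow as tf

# A Tensor is a multi-dimensional array; here, 2x2 matrices of floats.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])  # identity matrix

# Operations on tensors form the dataflow: matrix multiply, then add.
c = tf.matmul(a, b) + 1.0

print(c.numpy())  # a 2x2 result computed by the TensorFlow runtime
```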
Setting Up TensorFlow
a) First, we will check whether TensorFlow is already installed on our system by printing tf.__version__.
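A quick way to check (the printed version string will vary by installation):

```python
import tensorflow as tf

# Note the double underscores: tf.__version__ holds the installed version string.
print(tf.__version__)
```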
If it throws an error, then we need to install it by running the command below:
(base) PS C:\Users\HP> pip install tensorflow
Collecting tensorflow
Downloading tensorflow-2.10.0-cp310-cp310-win_amd64.whl (455.9 MB)
---------------------------------------- 455.9/455.9 MB 3.1 MB/s eta 0:00:00
Collecting keras<2.11,>=2.10.0
Downloading keras-2.10.0-py2.py3-none-any.whl (1.7 MB)
---------------------------------------- 1.7/1.7 MB 2.4 MB/s eta 0:00:00
Collecting grpcio<2.0,>=1.24.3
Downloading grpcio-1.49.1-cp310-cp310-win_amd64.whl (3.6 MB)
---------------------------------------- 3.6/3.6 MB 1.2 MB/s eta 0:00:00
Collecting libclang>=13.0.0
Downloading libclang-14.0.6-py2.py3-none-win_amd64.whl (14.2 MB)
---------------------------------------- 14.2/14.2 MB 593.1 kB/s eta 0:00:00
Collecting tensorflow-io-gcs-filesystem>=0.23.1
Downloading tensorflow_io_gcs_filesystem-0.27.0-cp310-cp310-win_amd64.whl (1.5 MB)
---------------------------------------- 1.5/1.5 MB 1.3 MB/s eta 0:00:00
Collecting tensorboard<2.11,>=2.10
Downloading tensorboard-2.10.1-py3-none-any.whl (5.9 MB)
---------------------------------------- 5.9/5.9 MB 1.9 MB/s eta 0:00:00
Requirement already satisfied: numpy>=1.20 in c:\users\hp\appdata\local\programs\python\python310\lib\site-packages (from tensorflow) (1.23.3)
Collecting gast<=0.4.0,>=0.2.1
Downloading gast-0.4.0-py3-none-any.whl (9.8 kB)
Collecting protobuf<3.20,>=3.9.2
Downloading protobuf-3.19.6-cp310-cp310-win_amd64.whl (895 kB)
---------------------------------------- 895.7/895.7 kB 1.1 MB/s eta 0:00:00
Collecting termcolor>=1.1.0
Downloading termcolor-2.0.1-py3-none-any.whl (5.4 kB)
Collecting google-pasta>=0.1.1
Downloading google_pasta-0.2.0-py3-none-any.whl (57 kB)
---------------------------------------- 57.5/57.5 kB 3.0 MB/s eta 0:00:00
Collecting absl-py>=1.0.0
Downloading absl_py-1.2.0-py3-none-any.whl (123 kB)
---------------------------------------- 123.4/123.4 kB 3.6 MB/s eta 0:00:00
Collecting h5py>=2.9.0
Downloading h5py-3.7.0-cp310-cp310-win_amd64.whl (2.6 MB)
---------------------------------------- 2.6/2.6 MB 2.4 MB/s eta 0:00:00
Collecting tensorflow-estimator<2.11,>=2.10.0
Downloading tensorflow_estimator-2.10.0-py2.py3-none-any.whl (438 kB)
---------------------------------------- 438.7/438.7 kB 2.0 MB/s eta 0:00:00
Collecting flatbuffers>=2.0
Downloading flatbuffers-22.9.24-py2.py3-none-any.whl (26 kB)
Collecting typing-extensions>=3.6.6
Downloading typing_extensions-4.3.0-py3-none-any.whl (25 kB)
.............................................
It may take some time to download all of TensorFlow's modules, so be patient. After this, we will import tensorflow as tf in our Jupyter notebook.
We will load the built-in MNIST dataset with tf and keras, and then convert the sample data from integers to floating-point numbers:
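A sketch of loading and scaling the data (the dataset is downloaded automatically on first use):

```python
import tensorflow as tf

# Load the MNIST handwritten-digit dataset bundled with Keras.
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Pixel values arrive as integers 0-255; scale them to floats in [0, 1].
x_train, x_test = x_train / 255.0, x_test / 255.0

print(x_train.shape)  # (60000, 28, 28)
```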
Build a machine learning model
We will build a tf.keras.Sequential model by stacking layers. Let's see how we can do this:
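A minimal sketch of such a model; the layer sizes follow the common MNIST example and are illustrative, and the input fed through at the end is random stand-in data:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(28, 28)),          # one 28x28 grayscale image
    tf.keras.layers.Flatten(),               # 28x28 image -> 784-long vector
    tf.keras.layers.Dense(128, activation='relu'),  # fully connected hidden layer
    tf.keras.layers.Dropout(0.2),            # randomly drop 20% of units in training
    tf.keras.layers.Dense(10)                # one raw logit per digit class
])

# Feed one random "image" through the untrained model to get raw logits.
predictions = model(np.random.rand(1, 28, 28).astype("float32")).numpy()
print(predictions.shape)  # (1, 10)
```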
For each example, the model returns a vector of logits or log-odds scores, one for each class. These raw scores are not yet probabilities.
Softmax is a mathematical function that converts a vector of numbers into a vector of probabilities, where the probability of each value is proportional to the relative scale of each value in the vector. We will convert these logits to probabilities by using the tf.nn.softmax function:
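For example, with made-up logits for a three-class problem:

```python
import tensorflow as tf

# Three made-up logits (raw scores) for one example.
logits = tf.constant([[2.0, 1.0, 0.1]])

# Softmax turns them into probabilities: all positive, summing to 1.
probs = tf.nn.softmax(logits)
print(probs.numpy())
```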
Softmax scales the outputs to positive numbers between 0 and 1 that sum to 1.
The SparseCategoricalAccuracy metric (not to be confused with the loss defined next) creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as the sparse categorical accuracy: an idempotent operation that simply divides total by count.
After this, we will define a loss function for training using SparseCategoricalCrossentropy, which takes a vector of logits and a true-class index and returns a scalar loss for each example.
This loss is equal to the negative log probability of the true class: The loss is zero if the model is sure of the correct class. This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to -tf.math.log(1/10) ~= 2.3.
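A sketch of this initial loss on an untrained model. To keep the snippet self-contained, the model is re-created here and fed a random stand-in batch rather than MNIST; the loss should still come out near 2.3 because the untrained model's outputs are close to uniform:

```python
import numpy as np
import tensorflow as tf

# Untrained model with randomly initialized weights (same shape as before).
model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)
])

# from_logits=True: the model outputs raw logits, not probabilities.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Random stand-in batch: 32 "images" and 32 true-class indices.
x = np.random.rand(32, 28, 28).astype("float32")
y = np.random.randint(0, 10, size=(32,))

loss = loss_fn(y, model(x)).numpy()
print(loss)  # roughly -log(1/10), i.e. about 2.3, for an untrained model
```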
Before starting training, we will configure and compile the model using Keras compile. Set the optimizer to adam, set the loss to the loss_fn function defined earlier, and specify a metric to be evaluated for the model by setting the metrics parameter to accuracy.
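A sketch of the compile step (the model is re-created here so the snippet stands alone):

```python
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Wire together the optimizer, the loss, and the metric to report during training.
model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy'])
```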
Training and Evaluating Our Model
We will use the Model.fit method to adjust the model parameters and minimize the loss:
You can change epochs to as many as you want; for this walkthrough, I am using 8.
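A sketch of the training call. A tiny random stand-in dataset keeps this snippet fast and self-contained; with MNIST you would pass x_train and y_train instead:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Random stand-in for x_train / y_train; with MNIST this would be
# model.fit(x_train, y_train, epochs=8).
x = np.random.rand(256, 28, 28).astype("float32")
y = np.random.randint(0, 10, size=(256,))
history = model.fit(x, y, epochs=8, verbose=0)

print(history.history['loss'][-1])  # loss after the final epoch
```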
The Model.evaluate method checks the model's performance, usually on a validation set or test set.
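Evaluation follows the same pattern; again a random stand-in "test set" is used here so the snippet stands alone (expect near-random accuracy on it), while with MNIST you would pass x_test and y_test:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Random stand-in test data; with MNIST this would be x_test, y_test.
x_test_stub = np.random.rand(64, 28, 28).astype("float32")
y_test_stub = np.random.randint(0, 10, size=(64,))

# Model.evaluate returns the loss plus each compiled metric, here [loss, accuracy].
loss, accuracy = model.evaluate(x_test_stub, y_test_stub, verbose=0)
print(loss, accuracy)
```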
Evaluating on the MNIST test set gives an accuracy of about 98% and a very low loss of about 0.07. If we want our model to return a probability, we can wrap the trained model and attach softmax to it. Let's see how we can do this:
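A sketch of the wrapping step (the inner model is re-created here, untrained, just to show the mechanics):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)            # still outputs raw logits
])

# Attach a Softmax layer so the wrapped model outputs probabilities directly.
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])

probs = probability_model(np.random.rand(1, 28, 28).astype("float32")).numpy()
print(probs.sum())  # the 10 class probabilities sum to 1
```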
Conclusion
The most well-known deep learning library in recent years is called TensorFlow. Any deep learning structure, such as a CNN, RNN, or basic artificial neural network, can be built by an expert using TensorFlow. Large corporations, startups, and academic institutions primarily use TensorFlow. Nearly all of Google's everyday products, including Gmail, Photos, and the Google Search Engine, employ TensorFlow.
TensorFlow was created by the Google Brain team to bridge the knowledge gap between researchers and product developers. TensorFlow was released to the public in 2015, and interest in it is rising quickly. The deep learning library with the most GitHub repositories today is TensorFlow. TensorFlow is used by professionals since it is simple to scale up. It is designed to operate on mobile platforms like iOS and Android as well as in the cloud.