Moving Below the Surface (3): TensorFlow — William

TensorFlow is one of the most widely used programming frameworks for algorithms that involve a large number of mathematical operations, and it is designed specifically for Machine Learning algorithms. TensorFlow was first developed by Google, and its source code soon became available on GitHub, the largest open-source code-sharing website. Google uses the library in almost all of its Machine Learning applications: from Google Photos to Google Voice, we have all been using TensorFlow directly or indirectly, while a fast-growing community of independent developers incorporates TensorFlow into their own software. TensorFlow can run on large clusters of computing hardware, and its strength in perceptual tasks gives it an edge over other Machine Learning libraries.

In this post, we will explore the conceptual structure of TensorFlow. Although TensorFlow is mostly used with the programming language Python, only fundamental knowledge of computer science is needed to follow along. As its name suggests, TensorFlow has two core components: Tensors and the computational graph (or “the flow”). Let me briefly introduce each of them.

Mathematically speaking, a Tensor is an N-dimensional array representing a set of data in N-dimensional space. In other words, a Tensor holds a group of points in a coordinate system with N axes. It is difficult to visualize points in high dimensions, but the following examples in two or three dimensions give a good idea of what Tensors look like.

As the number of dimensions increases, the amount of data a Tensor represents grows exponentially. For example, a Tensor of shape (3, 3) is a matrix with 3 rows and 3 columns, while a Tensor of shape (6, 7, 8) is a set of 6 matrices, each with 7 rows and 8 columns. The tuple (3, 3) or (6, 7, 8) is called the shape, or dimension, of the Tensor. In TensorFlow, a Tensor can be either a constant with fixed values or a variable whose values may be altered during computation.
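As a rough sketch of how these two kinds of Tensors look in code (assuming the TensorFlow 1.x Python API that was current when this post was written; the shapes are just the ones from the example above):

```python
import tensorflow as tf

# A constant Tensor of shape (3, 3): a 3-by-3 matrix with fixed values.
const_matrix = tf.constant([[1.0, 2.0, 3.0],
                            [4.0, 5.0, 6.0],
                            [7.0, 8.0, 9.0]])

# A variable Tensor of shape (6, 7, 8): 6 matrices of 7 rows and 8 columns,
# whose values can change during computation (for example, while training).
var_block = tf.Variable(tf.zeros([6, 7, 8]))

print(const_matrix.shape)  # (3, 3)
print(var_block.shape)     # (6, 7, 8)
```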

Now that we understand what a Tensor is, it is time to go with the Flow. The Flow refers to a computational graph, or simply a graph. Such a graph is always acyclic: it has distinct inputs and outputs and never feeds back into itself. Each node in the graph represents a single mathematical operation, such as an addition or a multiplication. Data flow from one node to the next in the form of Tensors, and the result of each operation is a new Tensor. The following is a simple computational graph.

[Figure: a simple computational graph for e = (a+b)*(b+1)]

The expression this graph computes is not complicated: e = (a+b)*(b+1). Let’s start from the bottom of the graph. The nodes at the lowest level are called leaves. Leaves do not accept any inputs; they only provide a Tensor as output. For this reason, plain Tensors appear only in leaf nodes, never in the operation nodes above them. The three leaves here are the variables a and b and the constant 1.

One level up are two operation nodes, each representing an addition. Both take two inputs from the nodes below. These middle and higher levels depend on their predecessors: they cannot be computed until the outputs of a, b, and 1 are available. Note that the two additions sit at the same level and are independent of each other, so TensorFlow can evaluate them in parallel rather than waiting for one to finish before starting the other.

The final node is a multiplication node. It takes c and d as inputs, forming the expression e = c*d, where c = a+b and d = b+1. Combining the two expressions, we get the final result e = (a+b)*(b+1).
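To make the walkthrough concrete, here is a minimal sketch of how this graph could be built and evaluated, assuming the TensorFlow 1.x graph-and-session API that was current when this post was written (the starting values 2 and 3 for a and b are just illustrative):

```python
import tensorflow as tf

# Leaves of the graph: two variables and one constant.
a = tf.Variable(2.0, name="a")
b = tf.Variable(3.0, name="b")
one = tf.constant(1.0, name="one")

# Middle level: two independent addition nodes.
c = tf.add(a, b, name="c")    # c = a + b
d = tf.add(b, one, name="d")  # d = b + 1

# Final node: multiplication of the two intermediate Tensors.
e = tf.multiply(c, d, name="e")  # e = (a + b) * (b + 1)

# Defining a, b, c, d, and e only builds the graph;
# running it in a Session is what makes the Tensors flow.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(e))  # (2 + 3) * (3 + 1) = 20.0
```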

That is all for our introduction to the basic concepts of TensorFlow. We will discuss more advanced features of TensorFlow in later posts. Stay tuned and see you next time!

