Machine Learning is all around us
Artificial Intelligence as a formal research field has been around since 1956. After decades of promises and high hopes, we have begun to live permanently and irreversibly with machines endowed with cognitive abilities, something that until recently was within the reach only of human beings. Massive datasets (known as Big Data), innovative algorithms running on ever-faster processors, virtually unlimited storage capacity, and a significant reduction in the cost of technology infrastructure have allowed the development of "smart" applications and devices to become a reality.
Research in Artificial Intelligence (AI) can be subdivided in a number of ways, depending on the techniques used (e.g., expert systems, artificial neural networks, evolutionary computation, fuzzy logic) or on the problems addressed (computer vision, language processing, predictive systems, among others). Currently, one of the most widely used AI approaches for developing new applications is known as machine learning (ML). Simply put, an ML system is presented with as much data as possible and, instead of following hand-coded rules, learns patterns from that data, developing the ability to make predictions and recommendations autonomously; artificial neural networks are among the most common techniques used to achieve this.
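To make the idea concrete, here is a minimal sketch in Python, using made-up data and one of the simplest learning techniques (a nearest-neighbour classifier): the program is never told the rule that separates the two labels, it infers the answer from the examples it has been shown.

```python
# Instead of hand-coding rules, the program infers labels from
# labeled examples: the core idea behind machine learning.

def nearest_neighbor_predict(training_data, point):
    """Return the label of the training example closest to `point`."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda example: squared_distance(example[0], point))
    return closest[1]

# Hypothetical training set: feature vectors paired with labels.
training = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"),
    ((9.5, 8.5), "large"),
]

print(nearest_neighbor_predict(training, (1.1, 0.9)))  # -> small
print(nearest_neighbor_predict(training, (9.0, 9.0)))  # -> large
```

The data and labels here are invented for illustration; real systems use the same principle with far more data, richer features, and more sophisticated models such as neural networks.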
The perception of ML's potential to transform virtually every industry is reflected in rising investment in ML-related startups: according to CB Insights, global funding grew from around $600 million in 2012 to over $5 billion in 2016. It should be noted that these figures do not include research investments made by governments, universities, and corporations around the world.
Technology companies such as Google, Microsoft, Apple, Facebook, and Amazon already incorporate smart techniques into their products and are heading toward a future in which all of their business lines will have a built-in ML component. The nature of the application hardly matters: automated real-time translation during a call, recommendations of what you want (or will want) to buy online, or recognition of your voice while you interact with your mobile phone.
One of the great challenges for companies is to define the best way to use this set of new techniques, whose answers are probabilistic by nature: the algorithms estimate a solution for a given problem, and there is no guarantee that it will be, in fact, the best one. If the process is not robust and reliable, which depends on the quality of the implementation and of the techniques used, the results can be detrimental to the financial health of the company in question.
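A short sketch may clarify what "probabilistic answers" means in practice. The toy model below (a logistic-style score with invented, supposedly already-learned parameters) does not return a yes-or-no verdict; it returns an estimated probability, which the company must then decide how to act on.

```python
# ML outputs are estimates with an associated confidence, not guarantees.
import math

def sigmoid(z):
    """Squash a raw score into the (0, 1) probability range."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(weights, bias, features):
    """Estimated probability that `features` belongs to the positive class."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(z)

# Hypothetical parameters; in a real system they are fitted from data.
weights, bias = [0.8, -0.5], 0.1

p = predict_proba(weights, bias, [2.0, 1.0])
print(f"estimated probability: {p:.2f}")  # an estimate, never a certainty
```

The numbers are fabricated, but the shape of the answer is the point: a value strictly between 0 and 1, which downstream processes must treat as an estimate rather than a fact.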
The number of machine learning startup acquisitions has been growing, led by large technology companies and, more recently, by other sectors such as automotive, electronics, and industrial. Also according to CB Insights, since 2012 more than 200 acquisitions worth billions of dollars have been made. Examples include Nervana (whose motto is "Making machines smarter"), acquired by Intel in August 2016 for about $350 million; Turi (named as a tribute to Alan Turing, whom we mentioned last week), bought by Apple for $200 million; and Viv, purchased by Samsung in October 2016 for $215 million so that it could launch its own personal digital assistant and compete with Apple's Siri. These transactions clearly indicate that a number of business verticals have felt the need to incorporate "intelligence" into their processes and products.
One of the most interesting transactions occurred in November 2016: the acquisition of Canada's Bit Stew by General Electric for about $150 million. Bit Stew has developed a platform to integrate and analyze data obtained through "connected" industrial devices. It is precisely this integration of real-world objects with the digital world, also called the Internet of Things, that we will discuss next week. See you then.