Thoughts after NVIDIA GTC19: Distributing Business into the AI Pipeline
by J. Mark Stadtmueller, VP, Product Strategy
GTC in San Jose is a great event. Spring is blooming, March Madness is launching, and the 'boys of summer' are getting ready to play games that count. And to me, GTC launches the new AI year. GTC brings together a critical mass of the AI industry and becomes a bellwether of where the envelope is being pushed in AI. So, here is my take on the current focus.
Last year (GTC2018) was really about scale. By then, it was clear that AI was enabling capabilities that did not previously exist. However, the method for doing this, deep neural networks, required scale (large datasets and large models). In particular, leveraging Dask for parallel Python and Horovod for parallel TensorFlow was a recurring theme. Many presentations showed histograms of distribution efficiency and the improved training times, and/or improved accuracy for a fixed training time, achieved with these parallel techniques. To a large extent, that focus (distributed AI pipelines) was predominant in 2018.
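The core idea behind Horovod-style data parallelism is simple: each worker computes gradients on its own shard of the data, the gradients are averaged across workers (an "allreduce"), and every worker applies the same update. The toy sketch below illustrates that idea in plain Python; the function names and the thread-based "workers" are illustrative stand-ins, not Horovod's or Dask's actual APIs.

```python
# Toy sketch of data-parallel training (the gradient-averaging idea
# behind Horovod). Names and structure are illustrative, not a real API.
from concurrent.futures import ThreadPoolExecutor

def local_gradient(weight, shard):
    """Each worker computes the mean squared-error gradient on its own
    data shard, for a 1-D linear model y = weight * x."""
    g = 0.0
    for x, y in shard:
        g += 2 * (weight * x - y) * x
    return g / len(shard)

def allreduce_mean(grads):
    """Stand-in for an allreduce: average gradients across workers."""
    return sum(grads) / len(grads)

def train(shards, steps=200, lr=0.05):
    """Every step: workers compute local gradients in parallel, the
    gradients are averaged, and one shared weight update is applied."""
    weight = 0.0
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        for _ in range(steps):
            grads = list(pool.map(lambda s: local_gradient(weight, s), shards))
            weight -= lr * allreduce_mean(grads)
    return weight

# Data generated from y = 3x, split across two worker shards.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
print(round(train(shards), 2))  # converges toward 3.0
```

Because every worker sees only its shard yet applies the identical averaged update, the result matches training on the full dataset while the gradient computation (usually the expensive part) is spread across machines.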
This year (GTC2019), many presentations took distributed pipeline capabilities as a given. But, to me, the clear theme was putting AI to work. That theme manifested itself in three areas:
Getting business people actively engaged in the AI pipeline: To put AI to work in businesses, business people need to be engaged not just in the outcomes of AI, but in the process of gaining advantage from AI. They need to understand the relationship of data to AI, non-deterministic decision making (as a great presentation by Amazon described), and the opportunities for new products and services. There was clear guidance for the many data scientists and machine learning engineers on strategies for getting business units actively involved.
Getting AI into production: For AI to stay strong past the hype curve, AI production use cases need to continue to expand. So, much of the discussion of next steps was about how to leverage AI building blocks such as LSTMs, GANs, and RL to push the envelope of production use cases. Covariant gave a great presentation on leveraging these tools to get robotic systems to learn industrial tasks much faster.
Dealing with small data: While great AI outcomes have required large data and large models (hence the 2018 focus on distributed computing), many businesses do not have large datasets. So, there were some very good presentations on using GANs to create synthetic training data, as well as presentations on techniques for coping with small datasets (transfer learning, among others). UBS gave a very good presentation on learning from small datasets to demonstrate outcomes before large-scale funding and deployment.
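The transfer-learning idea mentioned above is that a model trained on a large upstream dataset can be reused as a frozen feature extractor, so that only a small "head" needs to be fit on the business's handful of labeled examples. The sketch below is a deliberately tiny, pure-Python illustration of that pattern (the `pretrained_features` function stands in for a frozen pretrained network; in practice you would fine-tune a real pretrained model in Keras or PyTorch):

```python
# Toy sketch of transfer learning on a small dataset: keep a "pretrained"
# feature extractor frozen and fit only a lightweight logistic-regression
# head on a handful of labeled examples. Purely illustrative.
import math

def pretrained_features(x):
    """Frozen feature extractor (stands in for a network trained on a
    large upstream dataset). Maps a raw input to a feature vector."""
    return [x, x * x, math.sin(x)]

def train_head(samples, steps=2000, lr=0.1):
    """Fit only the small head (weights + bias) with SGD on the frozen
    features; the extractor itself is never updated."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(steps):
        for x, label in samples:
            f = pretrained_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - label
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

def predict(params, x):
    w, b = params
    f = pretrained_features(x)
    z = sum(wi * fi for wi, fi in zip(w, f)) + b
    return 1 if z > 0 else 0

# Only six labeled examples: the positive class sits at larger |x|.
tiny_dataset = [(-2.0, 1), (-1.5, 1), (-0.2, 0), (0.1, 0), (1.6, 1), (2.1, 1)]
params = train_head(tiny_dataset)
print([predict(params, x) for x in (-1.8, 0.0, 1.9)])
```

The head here is a few numbers, so six examples are enough to fit it; the heavy lifting was done (hypothetically) upstream, which is exactly why transfer learning helps businesses that lack large datasets.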
So, another great GTC and a great evolution of the AI industry. After you build an engine, that engine needs to be put to work, and I think the focus of GTC2019 was doing just that.