Three Key Areas Where Enterprise AI Needs Continued Momentum
by Mark Stadtmueller, VP, Product Strategy
Enterprise AI, the ability for businesses to use AI on their data to create new or better products and services, new or better customer interactions, and new or better ways of doing business, keeps advancing, and that is a very good thing. The more that businesses can leverage AI on their own terms and in their own direction, the smaller the threat that AI's transformative capability becomes concentrated in a few dominant global powerhouses.
AI innovation has been greatly spurred on by its open-source nature, by AI skill development being accessible to everyone via MOOCs (Massive Open Online Courses), by the attention placed on augmented intelligence (augmenting people, not replacing them), and by a focus on responsibility and explainability (not invasive use), which has been critical for businesses trying to serve their customers. But Enterprise AI needs continued progression as well, so that more companies can actively leverage it as part of their offerings. In particular, three areas need continued work:
Full Life Cycle Management for Enterprise AI
It's great that exciting AI advancements are occurring in the open-source community, but for businesses to create value, AI must become a pipeline from data to outcomes. Today, too much stitching together of different capabilities is still required to create flow through that pipeline. In addition, responsibility (the combination of governance, security, compliance, and collaboration) is still an add-on or an afterthought. And big-data handling is still too separate an evolution from AI training and AI serving. Full life-cycle management for Enterprise AI needs to be represented within businesses.
Better AI Training Techniques
Both stochastic gradient descent (SGD) and the art of hyperparameter tuning were critical to harnessing the learning capability of deep neural networks. But these techniques lean heavily on "Big Compute" resources and on elusive, unicorn-like data scientists. Two important advances need to continue. First, transfer learning, i.e., the ability to take a pre-trained deep neural network and apply it to a related but different use case with minimal retraining, needs to continue gaining prominence. Leaders in MOOCs (e.g., fast.ai) are pushing transfer learning forward. Transfer learning also minimizes hyperparameter tuning and the associated need for automatic neural architecture search, making AI far more accessible to businesses while reducing training and data requirements. Second, alternatives to SGD, such as reservoir computing approaches like echo state networks and liquid state machines, need to be commercialized. Because they do not require training every neuron in a network, they promise faster training with more efficient computing, making AI accessible to more businesses.
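To make the reservoir computing idea concrete, here is a minimal echo state network sketch in NumPy. It is illustrative only (not a commercial implementation): the reservoir weights are fixed at random, and the only training step is a closed-form ridge regression for the linear readout; no gradient descent touches the reservoir. The sizes, spectral radius, and toy sine-prediction task are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_reservoir = 200
spectral_radius = 0.9  # common heuristic: keep below 1 for the echo state property

# Fixed random reservoir: these weights are never trained.
W = rng.standard_normal((n_reservoir, n_reservoir))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, n_reservoir)

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)
X = run_reservoir(u[:-1])  # reservoir states
y = u[1:]                  # next-step targets

washout = 100              # discard the initial transient states
X, y = X[washout:], y[washout:]

# Train ONLY the linear readout, via ridge regression.
# This single closed-form solve is the entire training procedure.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out
mse = np.mean((pred - y) ** 2)
print(f"one-step prediction MSE: {mse:.2e}")
```

The contrast with SGD is the point: instead of iteratively updating every weight over many epochs, the expensive part (the reservoir) is generated once and left alone, and learning reduces to a cheap linear solve.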
Easier Production Serving
As noted above, most AI research ends with an accurately trained model. Often, the momentum and excitement end at that point, or after serving a simple web page. Dynamic learning, model updating, and the differences in data transformation between training data and the data input for inference all remain a mostly undocumented challenge and art form. Reproducibility and processes for production serving need continued attention as well.
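One small but common instance of the train/serve transformation gap can be sketched as follows. The fix is to persist the exact transformation parameters computed at training time and reuse them verbatim at inference, rather than re-deriving them from serving traffic. This is a minimal illustration with assumed data; the JSON string stands in for a real persisted artifact or model store.

```python
import json
import numpy as np

# --- training side: fit the transformation on training data ---
X_train = np.array([[1.0, 200.0],
                    [2.0, 180.0],
                    [3.0, 220.0]])
artifact = json.dumps({
    "mean": X_train.mean(axis=0).tolist(),
    "std": X_train.std(axis=0).tolist(),
})

# --- serving side: load the persisted parameters, never re-fit ---
params = json.loads(artifact)

def transform(x):
    # Standardize with the training-time statistics so the model sees
    # inputs distributed exactly as they were during training.
    return (np.asarray(x) - params["mean"]) / params["std"]

print(transform([2.0, 200.0]))
```

Codifying even this tiny handoff, so the serving path cannot silently drift from the training path, is the kind of reproducible process that production serving still lacks.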
Enterprise AI can drive what everyone wants: a responsible, decentralized AI capability that provides better services without AI domination by the powerful. Businesses that move forward with Enterprise AI are not only advancing in the digital economy; they are a force for good. But we need to keep pushing on full life-cycle management, better training techniques, and easier production serving.