Going with the TensorFlow… What’s Next for ML?

Kira Colburn
EnterpriseTechLondon
4 min read · Aug 31, 2020

Here in New York City, Work-Bench has hosted the New York Enterprise Technology Meetup for 8+ years to help build the NYC enterprise tech community. Across the pond in London, our good friend Ian Ellis has been building a strong enterprise community of his own with the London Enterprise Technology Meetup, and we recently had the chance to join forces to talk about the future of ML.

Kelley Mak, Principal at Work-Bench, moderated a panel on the latest research and evolution in the space, joined by Kemal El Moujahid, Head of Product for TensorFlow at Google, and Stephen Roberts, Professor of Machine Learning at the University of Oxford.

Machine learning is changing everything in today’s world, and Kemal even compared it to being “similar in magnitude to the first industrial revolution.” But where there’s rapid change, there needs to be a foundation of research. As an academic, Stephen believes “mathematics underpins everything in ML” and that theory and methodology help turn the unknowns in ML into concrete, proven processes. While we couldn’t cover the panelists’ 30+ years of research and development in an hour-long panel (there’s just too much), here are our top takeaways:

1. Ethics should be embedded in every AI / ML strategy

One common theme throughout the discussion was the growing importance of AI ethics. Not only does AI have the ability to make decisions that change lives for better or worse, but mishandling personal information or producing unfair AI outcomes can cost large enterprises hundreds of millions of dollars per year (usually via a public scandal). It’s the AI community’s responsibility as a whole to be thought leaders in the ethics of information and data analytics, and to uphold standards that keep AI models from running with unchecked bias. To do this, AI ethics needs to be centrally embedded into the AI modelling and research stream, not treated as an afterthought.

According to Arthur, an AI model monitoring and explainability platform (a Work-Bench portfolio company and one of the demos during the webinar), the core of a healthy, holistic AI stack rests on three essential pillars of model monitoring (a rough sketch of what one such check might look like follows the list):

  1. Performance — detecting data and accuracy drift in real time (quality issues, exposure to new markets, etc.) so that impacted models can be proactively identified and retrained
  2. Explainability — surfacing why black-box models made the decisions they did, so those decisions become more transparent and auditable
  3. Bias Detection — aggregating inference data and configuring thresholds to surface models whose consumer-affecting outcomes may be unintentionally discriminatory or even illegal
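
We don’t have access to Arthur’s actual APIs, but as a rough, hedged illustration of the bias detection pillar, here is a minimal Python sketch that aggregates inference data by group and flags any group whose positive-outcome rate falls below a configurable threshold relative to the best-served group (the function name and data shape are invented for this example):

```python
from collections import defaultdict

def demographic_parity_report(inferences, threshold=0.8):
    """Flag groups whose positive-outcome rate is below `threshold` x the best group's.

    inferences: iterable of (group, predicted_label) pairs, with labels in {0, 1}.
    This is an illustrative sketch, not Arthur's SDK.
    """
    positives, totals = defaultdict(int), defaultdict(int)
    for group, label in inferences:
        totals[group] += 1
        positives[group] += int(label == 1)

    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: {"rate": round(r, 3), "flagged": r < threshold * best}
            for g, r in rates.items()}

# Example: loan-approval decisions aggregated by a (hypothetical) demographic attribute.
report = demographic_parity_report(
    [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)])
print(report)  # group "B" gets flagged under the default 80% threshold
```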

2. Democratization is one solution to ML’s scaling problem

“How do you compute at scale?” continues to be a question for companies operating without the budget and data resources of enterprise giants like Google, Amazon and Microsoft. One of the biggest problems organizations face today is getting hold of the right data. Oftentimes the large data sets that are available are stale (months old), and synthetic data isn’t necessarily a better option: as Stephen explained, information theory tells us that if you create synthetic data by sampling from a model built on your existing data, you aren’t gaining any genuinely new knowledge. HumanLoop, another demo during the webinar, is tackling the pain points of data gathering and model deployment with its platform that annotates, trains and deploys NLP models with 10x less labelled data.
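
One common way to formalize Stephen’s point (our reading of it, not necessarily the exact result he had in mind) is the data processing inequality: if synthetic data Z is sampled from a model that was itself fit to the observed data Y, then for the underlying truth X we have a Markov chain X → Y → Z, and

```latex
% Data processing inequality for the Markov chain X -> Y -> Z
% (truth -> observed data -> synthetic data sampled from a model fit to Y):
I(X; Z) \le I(X; Y)
```

In other words, the synthetic samples can never carry more information about the truth than the data they were generated from.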

However, ML’s ability to scale grows every year as new numerical methods arrive that allow us to deal with more data. One reason for this is the increasing democratization of ML, which has two meanings. The first is that more big platforms are sharing and open-sourcing their tech. For example, Google runs a program called TensorFlow Research Cloud, which shares ML compute resources around the world, for free. The second is the simplification of the tech itself. We’re seeing a shift to a higher level of abstraction, and we’ve seen this trend before with programming: first people learned assembly code, then higher-level languages appeared and they learned to code in those, then Excel came on the scene, allowing people who couldn’t code at all to simply punch in numbers and get the same outcomes.
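
As a concrete (if simplified) illustration of that higher level of abstraction, here is roughly what defining and training a small classifier looks like in TensorFlow’s high-level Keras API today: a handful of declarative lines rather than hand-written training loops (the layer sizes and the commented-out data are placeholders):

```python
import tensorflow as tf

# High-level Keras API: declare the model, compile it, and fit.
# Layer sizes and the training data below are placeholders for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5, validation_split=0.2)
```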

3. AutoML is going to be a game changer

The idea of using data to predict future outcomes is far from new, but AutoML takes it a step further and covers the complete ML pipeline, from the raw dataset to the deployable model. AutoML lets data science teams speed up the testing and training of ML algorithms and, in turn, the development of predictive algorithms. More simply put, AutoML allows AI to figure out the right structure for itself.
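
The panel didn’t walk through specific tooling, but to make “figuring out the right structure for itself” concrete, here is a hedged sketch using KerasTuner, an open-source hyperparameter search library in the TensorFlow ecosystem (the search space, data, and trial budget are placeholder choices, not anything the panelists endorsed):

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    # The tuner, not the engineer, picks the layer width and learning rate
    # from this declared search space.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(
            hp.Int("units", min_value=32, max_value=256, step=32),
            activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=10)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val))
# best_model = tuner.get_best_models(num_models=1)[0]
```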

If you believe in the exponential progression of tech, as Kemal does, then you’ll see AutoML as transformational in the way it will provide transfer learning / knowledge sharing between tasks, helping reduce the massive amounts of data needed to solve problems. AutoML will be the next step in handing data to AI and letting it figure out the best prediction, shifting the real expertise to how you formulate the business problem and how and where you embed ML into your processes.
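
Transfer learning is already routine in the TensorFlow ecosystem, and as a small illustration of the “less data” point, here is a hedged sketch that reuses an ImageNet-pretrained backbone and trains only a small new head on the target task (the dataset, input size, and head are placeholders):

```python
import tensorflow as tf

# Reuse a backbone pretrained on ImageNet and freeze it, so only the small
# new classification head needs to be trained on the (much smaller) target dataset.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. a binary target task
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(target_task_images, target_task_labels, epochs=5)
```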

4. Enterprise adoption accelerates at the intersection of business & tech lines

When you look at the biggest barriers to ML in the enterprise, they usually come down to extracting enough data to build models and to finding and hiring technical talent. To fix this, enterprises need to “de-silo” their technical and business talent, and look for entrepreneurial thinkers who can better answer questions like, “What problem am I trying to solve? Is deep learning even useful for this?” Oftentimes, once you’ve defined the business use case, older methods (not ML or AI) can get the job done at a fraction of the cost.

Sign up for the Work-Bench Enterprise Weekly Newsletter to follow the latest from our team and get the top enterprise tech news, funding updates, events, and more.

Kira Colburn is Head of Content at Work-Bench, leading the firm’s content vision, strategy, and production.