My youthful understanding of AI has broadened quite a bit since then (I won’t be building robots anytime soon!), and I’m engaging with AI and ML initiatives at my company every day. This got me thinking: How can I set up our AI and ML initiatives to run as smoothly as these fictional robots?
First step: ensuring that data across the organization runs on the same core math, whether that means simple flags, basic calculations, or sophisticated algorithms. These inputs form the core of AI and ML models. Without consistent data and math, it is very hard to drive consistent results, let alone run as smoothly as C-3PO.
I’ve noticed that my colleagues and I often run the same analytic, trying to achieve the same result using the same definition for different projects. But inevitably we get slightly different results. This creates confusion and frustration for our team, forcing us to waste hours trying to reconcile the differences and find the root cause of the inconsistencies. Ultimately, we end up massaging the data so that the results appear consistent. This may solve the immediate problem, but it raises questions about the integrity of our analytic outcomes and makes it harder to trust the results. And around and around we go.
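As a toy illustration of how this happens (the metric name, field names, and 30-day window here are all hypothetical, not from any real project), two teams copying "the same" definition can drift by a single comparison operator, while a single shared function keeps the math in one place:

```python
# Project A's copy: "active" means purchased within the last 30 days, inclusive.
def active_rate_a(customers):
    active = [c for c in customers if c["days_since_purchase"] <= 30]
    return len(active) / len(customers)

# Project B's copy: a subtly drifted duplicate that excludes day 30.
def active_rate_b(customers):
    active = [c for c in customers if c["days_since_purchase"] < 30]
    return len(active) / len(customers)

# The alternative: one shared definition that every project imports,
# so the math lives in exactly one place.
def active_rate(customers, window_days=30):
    """Share of customers who purchased within the last `window_days` days."""
    active = [c for c in customers if c["days_since_purchase"] <= window_days]
    return len(active) / len(customers)

customers = [{"days_since_purchase": d} for d in (10, 30, 45, 100)]
print(active_rate_a(customers))  # 0.5
print(active_rate_b(customers))  # 0.25 -- same "definition", different answer
print(active_rate(customers))    # 0.5
```

The point is not the specific metric but the pattern: copies diverge silently, and the hours spent reconciling them disappear once every project calls the same governed function.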
Like most companies these days, we’re juggling several data sources, applications, infrastructures, and teams, trying to keep our analytic outcomes consistent. It’s already complex and hard, and it’s getting worse. Some days it makes my head spin just thinking about how to solve it. We’ve tried throwing more technology resources at the initiative, but we found this just creates additional silos—the math becomes trapped in the new technology.
I stepped back for a minute and thought about this from an individual perspective. I realized every piece of code I write has value to someone else in the future, even my future self. My personal challenge today is that I’m not as organized as I’d like to be, and it’s hard to reuse even my own code (let alone anyone else’s), so I just rewrite it.
Looking at this from an enterprise perspective, I know that lots of other analysts don’t stay organized either—I’m definitely in good (or is it bad?) company. This lack of organization and collaboration locks information into pockets, and those pockets leak when people leave. The business risk, needless to say, is huge.
So how can teams start seeing all the analytic code? Could this be done on a personal level? I realized that the only way to get to consistency is the ability to find and discover existing analytics—in other words, to reuse previous work instead of writing it again, and again, and again…
But what if I create something that I’m not ready for others to use, or they don’t have permission to use it? As much as I love my enterprise colleagues, sometimes I (and my code) need a little privacy. So, this solution needs to be able to lock down my work and only provide it to those who are ready to take advantage of it.
My final realization? Managing and governing analytics as assets enables sharing and reuse of the same math.
See everything. Use everywhere. Trust every time. These are the real secrets to unlock the data and math silos in large enterprises.
Curious about where others are on their silo-busting journey and what obstacles they face in achieving AI and ML? Check out our research paper, produced in partnership with WBR Research.